title
stringlengths 10
147
| abstract
stringlengths 3
2.41k
| tldr_text
stringlengths 96
425
⌀ | content_markdown
stringlengths 0
464k
⌀ | authors
sequencelengths 1
41
⌀ | date
timestamp[ms] | publish_info
stringclasses 111
values | publish_is_top
bool 2
classes | citation_count
uint32 0
1k
| citation_count_filtered_math_and_top_conf
uint32 0
127
| theorem_provers
sequencelengths 1
4
⌀ | url
stringlengths 31
152
⌀ | arxiv_url
stringlengths 32
32
⌀ | semantics_scholar_url
stringlengths 78
78
⌀ |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning | Multimodal Large Language Models (MLLMs) excel in solving text-based mathematical problems, but they struggle with mathematical diagrams since they are primarily trained on natural scene images. For humans, visual aids generally enhance problem-solving, but MLLMs perform worse as information shifts from textual to visual modality. This decline is mainly due to their shortcomings in aligning images and text. To tackle aforementioned challenges, we propose Math-PUMA, a methodology focused on Progressive Upward Multimodal Alignment. This approach is designed to improve the mathematical reasoning skills of MLLMs through a three-stage training process, with the second stage being the critical alignment stage. We first enhance the language model's mathematical reasoning capabilities with extensive set of textual mathematical problems. We then construct a multimodal dataset with varying degrees of textual and visual information, creating data pairs by presenting each problem in at least two forms. By leveraging the Kullback-Leibler (KL) divergence of next-token prediction distributions to align visual and textual modalities, consistent problem-solving abilities are ensured. Finally, we utilize multimodal instruction tuning for MLLMs with high-quality multimodal data. Experimental results on multiple mathematical reasoning benchmarks demonstrate that the MLLMs trained with Math-PUMA surpass most open-source MLLMs. Our approach effectively narrows the performance gap for problems presented in different modalities. The code and data are available at: \url{https://github.com/wwzhuang01/Math-PUMA}. | This approach is designed to improve the mathematical reasoning skills of MLLMs through a three-stage training process, with the second stage being the critical alignment stage and effectively narrows the performance gap for problems presented in different modalities. | ## Math-PUMA: Progressive Upward Multimodal Alignment to Enhance Mathematical Reasoning
**Wenwen Zhuang[1*], Xin Huang[2*], Xiantao Zhang[3*], Jin Zeng[1]**
1University of Chinese Academy of Sciences
2Beijing Institute of Technology 3Beihang University
[email protected], [email protected], [email protected]
**Abstract**
Multimodal Large Language Models (MLLMs) excel in solving text-based mathematical problems, but they struggle with
mathematical diagrams since they are primarily trained on
natural scene images. For humans, visual aids generally enhance problem-solving, but MLLMs perform worse as information shifts from textual to visual modality. This decline is mainly due to their shortcomings in aligning images and text. To tackle aforementioned challenges, we propose Math-PUMA, a methodology focused on Progressive
**Upward Multimodal Alignment. This approach is designed**
to improve the mathematical reasoning skills of MLLMs
through a three-stage training process, with the second stage
being the critical alignment stage. We first enhance the language model’s mathematical reasoning capabilities with extensive set of textual mathematical problems. We then construct a multimodal dataset with varying degrees of textual and visual information, creating data pairs by presenting each problem in at least two forms. By leveraging the
Kullback-Leibler (KL) divergence of next-token prediction
distributions to align visual and textual modalities, consistent
problem-solving abilities are ensured. Finally, we utilize multimodal instruction tuning for MLLMs with high-quality multimodal data. Experimental results on multiple mathematical
reasoning benchmarks demonstrate that the MLLMs trained
with Math-PUMA surpass most open-source MLLMs. Our
approach effectively narrows the performance gap for problems presented in different modalities.
**Text-only** **Text-dominant** **Vision-dominant**
_In triangle ABC, it is known that angle_ _In triangle ABC, it is_ _As shown in the fig-_
_A = 80AB and point E is on ACBC, then the size of angle CED is ().°, angle B = 60°, point D is on, DE parallel_ _known that 80DE parallel BC, then the size of angle CED °, angle B = 60angle A = °,_ _ure, DE parallel BC, then the size of angle CED is ()._
_Choices: A:40° B:60° C:120° D:140°_ _is (). Choices: ..._ _Choices: …_
_Given that … Therefore, the_ _Given that … Therefore, the_ _Given that … Therefore, the_
_size of angle CED is D:140°_ _size of angle CED is A:40°_ _size of angle CED is A:40°_
Figure 1: (Top) Three examples of GPT-4o solving multimodal math problems. These examples represent different
modalities of the same question. (Bottom) Results of several
open-source MLLMs and human on five different tasks of
MATHVERSE (Zhang et al. 2024).
els are applied to mathematical diagrams, thereby resulting
in inferior performance.
For humans, regardless of the modality in which information is presented, problems with equivalent amounts of
information tend to have similar levels of difficulty. Furthermore, incorporating images into problem-solving tasks can
enhance human comprehension and resolution abilities. As
illustrated in Figure 1, an increase in visual data often correlates with a decline in the efficacy of most MLLMs. Additionally, there is a notable disparity in effectiveness between text-centric and exclusively visual problems. For example, GPT-4o (OpenAI 2024b) demonstrates strong proficiency in solving text-only mathematical problems, but its
effectiveness diminishes progressively as the modality transitions from textual to visual. This reduction in capability
primarily stems from the current models’ inadequate alignment between visual and textual data, which impairs their
overall functionality.
**Introduction**
Large Language Models (LLMs) have demonstrated remarkable reasoning capabilities, particularly when tackling
mathematical problems in textual form (Wei et al. 2022;
Chen et al. 2022; Gou et al. 2023; Yu et al. 2023; Shao
et al. 2024). However, Multimodal Large Language Models
(MLLMs) face greater challenges when tackling problems
that involve images. These models need to not only interpret textual information but also comprehend mathematical
diagrams and identify details crucial for solving problems.
Although MLLMs have exhibited notable efficacy in general visual question answering (Radford et al. 2021; Li et al.
2022; Liu et al. 2023), their training predominantly relies on
datasets comprising natural scene images. This reliance engenders a substantial domain discrepancy when these mod
*These authors contributed equally.
-----
To address this issue, we propose Math-PUMA, a methodology centered around Progressive Upward Multimodal
**Alignment (PUMA), aimed at enhancing the mathematical**
reasoning capabilities of MLLMs. Our approach is structured into three distinct stages, with stage 2 serving as the
pivotal alignment phase. (1) Stage 1: We train the LLM
using a substantial dataset of text-based math problems to
enhance its problem-solving capabilities. This phase capitalizes on the extensive availability of text-based math
problem-solving data. (2) Stage 2: Through data augmentation leveraging publicly available sources and automated
data generation, we have constructed 692K data pairs. Each
pair conveys identical information but differs in its multimodal representation. We utilize the KL-divergence of nexttoken prediction distributions between text-rich and visionrich problems. This approach ensures the model maintains
consistent performance across different modalities, thereby
achieving modal alignment and improving its ability to
tackle multimodal mathematical problems. (3) Stage 3: We
select 996K high-quality multimodal problem-solving data
to fine-tune the model, further enhancing its performance in
multimodal mathematical problem-solving tasks.
The contributions of this paper are three-fold:
- We curate a large-scale dataset, Math-PUMA-1M, which
comprises 692K data pairs and 996K multimodal mathematical data. This dataset serves as a valuable resource
for model training.
- We propose Math-PUMA, a methodology based on
Progressive Upward Multimodal Alignment, which enhances mathematical reasoning in MLLMs through a
three-stage process.
- Experimental results on three widely-used benchmarks
demonstrate that the MLLMs trained with Math-PUMA
outperforms most open-source models. Notably, our approach effectively narrows the performance gap for problems that contain the same information but are presented
in different modalities, as evidenced by results on MATHVERSE.
**Related Work**
**Multimodal Large Language Models**
The exploration of Multimodal Large Language Models
(MLLMs) has been inspired by advancements in Large Language Models (LLMs), resulting in remarkable capabilities across a variety of tasks that require both visual and
linguistic understanding. CLIP (Radford et al. 2021) is a
breakthrough model that learns transferable visual representations from natural language supervision. LLaVA series
(Liu et al. 2023, 2024a) pioneer visual instruction tuning for
LLMs, employing a simple MLP as a projector to connect
the vision encoder with the language model. Models such
as Qwen-VL (Bai et al. 2023) and Deepseek-VL (Lu et al.
2024a) introduce a new visual receptor or a hybrid vision
encoder, significantly enhancing their ability to perceive and
understand visual inputs. However, despite these significant
strides, MLLMs still face considerable challenges, particularly in multimodal mathematical reasoning. This is primarily due to the substantial domain gap between the natural
scene image and the abstract mathematical graphics. There
is a pressing need to enhance the understanding and reasoning abilities of MLLMs in relation to mathematical diagrams.
**Multimodal Mathematical Reasoning**
The advancement of MLLMs has driven significant research
into multimodal reasoning. Current efforts are primarily
centered on data augmentation to improve models’ performance. Significant efforts have been invested in augmenting text-only mathematical problem-solving data to enhance
LLMs’ reasoning capabilities (Saxton et al. 2019; Yu et al.
2023; Liu and Yao 2024). G-LLaVA (Gao et al. 2023a)
and Math-LLaVA (Shi et al. 2024) improve multimodal
mathematical reasoning by constructing the Geo170K and
MathV360K datasets, respectively. These datasets are created by generating additional questions for images sourced
from public datasets. However, they only serve to expand
the text, without increasing the diversity of images in the
dataset. GeoGPT4V (Cai et al. 2024) leverages GPT-4V
(OpenAI 2023b) to generate new problems and images
based on existing datasets, creating a dataset of 4.9K geometric problems combined with 19K open-source data. Nevertheless, due to GPT-4V’s subpar capability in generating
code from image descriptions, the quality of the generated
data is comparatively inferior. By comparison, our work not
only makes new advancements in data augmentation, including text rephrasing and the generation of high-quality images, but also introduces a novel alignment method used for
training.
**Methodology**
**Data Construction**
In order to refine alignment between visual and textual
modalities, we need to construct data pairs. We clarify a
**“data pair” as a set of two data components, which share**
an equivalent level of information within each problem context, and their solutions are identical. A “data pair” is defined as two data components that contain equivalent information within the same problem context, and their solutions
are identical. However, the distribution of information across
different modalities may vary within each pair. We use the
term “vision-rich” to describe the data component where
the visual modality has a higher proportion of information,
whereas “text-rich” refers to the component with a higher
proportion of textual information.
The methods we employ to construct data pairs include
automatic data generation and data augmentation based on
publicly available sources.
**Automatic Data Generation** We implement an automatic
data generation pipeline for three categories of mathematical problems: plane geometry, solid geometry, and functions. The pipeline consists of four agents: (1) Question
**Designer, responsible for formulating problems and assign-**
ing information to visual and textual modalities; (2) Plotter,
which generates diagrams; (3) Solver, which provides answers and explanations; and (4) Pair Constructor, which
-----
Constructor can obtain up to four types of data with
the same information but different modalities, including:
vision-only, vision-dominant, text-dominant, and textonly. Two are randomly selected to form a data pair, with
the one containing more information in the visual modality being classified as vision-rich, and the other as textrich.
We generated 40K data each for plane geometry, solid geometry, and functions, summing up to 120K.
**Data Augmentation** We collected 80K mathematical
problem-solving data from online sources. By rephrasing
the problems from multiple perspectives (Yu et al. 2023),
and applying a series of traditional image processing techniques such as scaling, stretching, and gamma transformation, we ultimately expanded the dataset to 310K data. Additionally, we utilized the VisualWebInstruct dataset (TIGERLab 2024) containing 262K data. To automate the construction of data pairs, we employed a straightforward text-toimage rendering process to transition the content from text
to visual form. The original data serve as the text-rich component, while the generated data form the vision-rich component. In total, we obtained 572K data pairs.
**Training Stages**
We employ a three-stage pipeline to train our model MathPUMA, with specific details shown in Figure 3.
**Stage 1: Enhancing the Language Model’s Mathemat-**
**ical Reasoning Abilities** Considering the abundance of
unsupervised text-based mathematical training corpora and
problem-solving data (Shao et al. 2024), in contrast to the
scarcity of high-quality multimodal mathematical problemsolving data, we initially train the language model on a
large number of text-based math problems to enhance its
mathematical reasoning capabilities. Given that some models (Shao et al. 2024; Yang et al. 2024) have been thoroughly
trained and have demonstrated superior performance in solving mathematical problems, we respectively use them to initialize our LLM. Subsequently, we extracted 200k data from
these datasets (Yue et al. 2023; Tong et al. 2024; Mitra et al.
2024; LI et al. 2024) to fine-tune the model. Through this
phase, the language model’s mathematical reasoning abilities are substantially improved.
**Stage 2: Progressive Upward Multimodal Alignment**
We observe that the multimodal mathematical reasoning
ability of MLLMs resembles a pyramid: from bottom to top,
the performance of MLLMs progressively declines while
the information in the text modality gradually shifts to the
visual modality. To bridge the performance gap between
text-rich and vision-rich mathematical reasoning, we propose PUMA, a Progressive Upward Multimodal Alignment
method. PUMA aims to enable MLLMs to tackle vision-rich
mathematical problems by aligning their outputs with textrich data, thereby gradually enhancing the reasoning capability of MLLMs on mathematical problems.
Let i = 0, 1, 2, 3 represents the levels of capability for
MLLMs, ranging from weak to strong (top-down). For a visual mathematical problem, the inference results of MLLMs
**_Visual Modality_** **_Textual Modality_** **Plotter**
**Question Designer**
…
**_Textual info._** **_Visual info._**
Find 𝐵𝐶. A triangle with …
𝐴𝐵= 2, 𝐴𝐶= 1,
∠𝐴= 60°.
…
Calculate A cylinder with
_random_ the volume. ℎ= 3, 𝑟= 1.
**Solver**
Determine A linear function Question Rationale
the function. through points _re-solve_
0, 1, 2, 4 .
Answer Extracted
_verify_ answer
**Pair Constructor**
**_Vision-rich_** **_Text-rich_**
**Vision-only** **Vision-dom.** **Text-dom.** **Text-only**
_Randomly generate a text-rich/vision-rich data pair_
Figure 2: The pipeline of automatic data generation.
produces four types of data and randomly selects two to
form a data pair. Figure 2 illustrates this automatic data generation process.
- Question Design: The Question Designer randomly selects a type of mathematical problem and generates a
specific problem based on the selected type. It also randomly selects the information carrier, deciding whether
to present the information in text or in an image. This determines the visual information sent to the Plotter and the
textual information sent to the Solver.
- Plotting: Based on the received visual information, the
Plotter uses our predefined basic tools to draw diagrams.
- Problem Solving: The Solver calculates the answer
based on the text-only version of the question provided
in the received textual information, as this format contains complete problem information. Since the calculation process is executed through a program, the answer
is deterministic and reliable. Considering that MLLMs
can obtain stronger reasoning abilities from step-by-step
solutions, the Solver generates a detailed explanation for
each problem by calling GPT-4o-mini (OpenAI 2024a)
and verifying the generated explanations against the standard answer to ensure accuracy.
- Pair Construction: Combining the diagram output from
the Plotter and the text output from the Solver, the Pair
-----
_trainable_
**Stage-1: Enhancing LLM** _frozen_
**Math-PUMA**
**LLM** **A.** **V** **Vision**
**E** **Encoder**
**_Textual math corpora_**
**Stage-2:**
**Adapter**
𝐴𝐵 and 𝐴𝐶 are two chords of Calculate the degree
the circle ⊙𝑂. 𝑂𝐷 is per- measure of ∠𝐵𝑂𝐶.
pendicular to 𝐴𝐵 at point 𝐷,
**Math-PUMA** and 𝑂𝐸 is perpendicular to
**LLM** **A.** **V** 𝐴𝐶and at point 𝑂𝐶. Given that 𝐸. Connect ∠𝐷𝑂𝐸=𝑂𝐵 **Large Language**
**E** 130[∘], calculate the degree
**_PUMA data_** measure of ∠𝐵𝑂𝐶. **Model**
**_Strong logits_** **_Weak logits_**
**Stage-3: Multimodal Instruction Tuning**
**Math-PUMA** 𝓛𝐅𝐊𝐋, 𝓛𝐑𝐊𝐋 𝓛𝐡𝐚𝐫𝐝
_Strong-level flow (w/o grad)_
**LLM** **A.** **V** _Weak-level flow (w grad)_
**E** _Loss calculation_ **Labels**
**_Multimodal data_**
Figure 3: Overview of the PUMA approach. (left) The three stage training process of Math-PUMA. (right) The details for aligning data pair. The input data pair includes text-rich data at the strong level and vision-rich data at the weak level, simultaneously
processed by the MLLM. The strong logits and labels are used to supervise the weak logits.
are progressively inferior on the i-th level compared to the
(i + 1)-th level. We denote the response distribution (logits)
obtained by MLLMs when processing the input of i-th level
as pi, while the response distribution (logits) obtained on
the input of (i + 1)-th level is denoted as pi+1. The forward
KL (FKL) divergence and reverse KL (RKL) divergence between these distributions are calculated, since they converge
to the same objective after a sufficient number of epochs for
MLLMs (Wu et al. 2024).
Let y[(][i][)] = _{yt[(][i][)][}]t[T]=1_ [represent the response gen-]
erated by MLLMs based on input x[(][i][)]. Here, yt[(][i][)]
_∈_
_{size.Y1[(][i] p[)][, Y]i and2[ (][i][)][, ..., Y] pi+1V[ (] represent the distributions of weak and[i][)][}][, with][ V][ representing the vocabulary]_
strong levels, z[(][i][)] = (z1[(][i][)][, z]2[(][i][)][, ..., z]V[(][i][)][)][ and][ z][(][i][+1)] =
(z1[(][i][+1)], z2[(][i][+1)], ..., zV[(][i][+1)]) represent the logits of weak and
strong levels, respectively. The FKL divergence and RKL
divergence are computed as follows:
with
exp (zj[(][i][)][/τ] [)]
_pi(Yj[(][i][)]_ **y<t[(][i][)][) =]** _V_ _,_ (3)
_|_
_k=1_ [exp(][z]k[(][i][)][/τ] [)]
where τ represents the temperature hyperparameter.
P
Furthermore, to maintain training stability, we calculate
a hard loss by utilizing the solutions of mathematical problems as the ground truth labels, i.e.,
hard =
_L_ _−_ _TV[1]_
=
_−_ _TV[1]_
_t=1_ log pi(yt[(][i][)][|][x][(][i][)][,][ y]<t[(][i][)][)]
X
(4)
log pi(Yj[(][i][)] **x[(][i][)], y<t[(][i][)][)][.]**
_|_
_j=1_
X
_t=1_
Finally, the total loss is computed as
_L = λKL(αKLLFKL + (1 −_ _αKL)LRKL)τ_ [2] + (1 − _λKL)Lhard, (5)_
where λKL is a hyperparameter that balances the weight between the combined FKL and RKL divergences and the hard
loss term, αKL is a weight hyperparameter that balances the
contribution between LFKL and LRKL. The purpose of multiplying KL by τ [2] is to equalize the gradients of the two
losses.
At this stage, we use a total of 692K data pairs for training, which includes 120K data pairs automatically generated
and 572K data pairs obtained through data augmentation
based on publicly available data as described in Data Construction.
KL _pi(yt[(][i][)]_ _<t[)][||][p][i][+1][(][y]t[(][i][+1)]_ **y<t[(][i][+1)]**
_t=1_ _[|][y][(][i][)]_ _|_
X
_LFKL =_
=
_LRKL =_
_TV_
1
_TV_
1
_TV_
1
_TV_
(1)
(2)
_V_ _pi(Yj[(][i][)]_ **y<t[(][i][)][)]**
_pi(Yj[(][i][)]_ **y<t[(][i][)][) log]** _|_
_j=1_ _|_ _pi+1(Yj[(][i][+1)]_ **y<t[(][i][+1)]**
X _|_
_t=1_
_t=1_ KL _pi+1(yt[(][i][+1)]|y<t[(][i][+1)])||pi(yt[(][i][)][|][y]<t[(][i][)][)]_
X
_V_ _pi+1(Yj[(][i][+1)]_ **y<t[(][i][+1)]**
_pi(Yj[(][i][+1)]_ **y<t[(][i][+1)]) log** _|_
_j=1_ _|_ _pi(Yj[(][i][)]_ **y<t[(][i][)][)]**
X _|_
_t=1_
-----
**Stage 3: Multimodal Instruction Tuning** In the final
phase, we enhance the model’s reasoning capabilities by incorporating multimodal problem-solving data. Initially, we
retained the majority of the high-quality data used in Stage 2,
while augmenting our dataset with the MathV360K dataset
(Shi et al. 2024). Specifically, we focused on enriching the
geometric problem subset within the MathV360K dataset,
expanding it from 40K to 120K data to address the scarcity
of geometric data. Additionally, we integrated a balanced
amount of textual data to prevent any modal imbalance between text and visual modalities. All data included detailed
reasoning processes to guide the model’s understanding and
learning.
Ultimately, we compiled a large-scale instruction tuning
dataset, comprising a total of 996K data. This multimodal
instruction tuning not only bolsters the model’s ability to
reason and solve problems but also ensures that it can effectively leverage both textual and visual information for enhanced performance in mathematical problem-solving.
**Experiments**
**Experimental Setup**
**Models** We validate the effectiveness of our method across
various base models and scales, including DeepSeek-Math7B (Shao et al. 2024), Qwen2-1.5B and Qwen2-7B(Yang
et al. 2024), chosen as the LLMs for Math-PUMA. To ensure the compatibility with DeepSeek-Math and DeepSeekVL (Lu et al. 2024a), we adhere to the architecture of
DeepSeek-VL. For Qwen2, we adopt a similar architecture
to LLaVA, with the visual encoder designated as SigLIPso400m-patch14-384 (Zhai et al. 2023).
**Benchmarks** We conduct extensive experiments on three
popular multimodal mathematical problem-solving benchmarks: MATHVERSE (Zhang et al. 2024), MATHVISTA (Lu
et al. 2024b), and WE-MATH (Qiao et al. 2024). MATHVERSE evaluates the multimodal mathematical reasoning
abilities of MLLMs under five different conditions. MATHVISTA comprises samples that require fine-grained, in-depth
visual understanding and compositional reasoning, posing a
challenge for all baseline models on this benchmark. WEMATH is the first benchmark specifically designed to explore the problem-solving principles beyond the end-to-end
performance.
**Evaluation and Metrics** For MATHVERSE and MATHVISTA, we adopt the official implementation. Initially, we
use GPT-4o-mini (OpenAI 2024a) to extract answers from
the responses generated by MLLMs. Subsequently, we employ GPT-4o-mini (OpenAI 2024a) once more to verify the
correctness of the extracted answers. The prompts used for
answer extraction and correctness assessment are kept consistent with the official implementation. Ultimately, we calculate the accuracy scores as the evaluation metric. For WEMATH, we select the average and Rote Memorization (RM)
scores as evaluation metrics.
**Implementation Details** Our experiments are conducted
using PyTorch version 2.1.0 and CUDA 12.1, utilizing 32
NVIDIA A100 GPUs with 80GB memory each. The training process is divided into three stages, each with specific hyperparameters and configurations. We employ the
AdamW optimizer (Kingma and Ba 2014) with β1 = 0.9
and β2 = 0.999. The learning rate is adjusted across three
stages: 3 × 10[−][5] for stage 1, 5 × 10[−][5] for stage 2, and
3 × 10[−][5] for stage 3. A cosine learning rate schedule is implemented with a warm-up phase covering 2% of the total
training steps. Additionally, a decay rate of 0.1 is applied.
The KL divergence is controlled using specific hyperparameters: αKL is set to 0.2, τ to 1.0, and λKL to 0.1. The training
is conducted over 1 epoch. The batch sizes for three stages
are 256, 512, and 256, respectively.
**Performance Comparison**
**Comparison on MATHVERSE** MATHVERSE is capable
of clearly demonstrating the gap between visual and textual modalities. From Table 1, it can be observed that the
MLLMs trained by Math-PUMA achieve the state-of-theart (SOTA) among open-source MLLMs. Compared to the
previous SOTA method, Math-LLaVA, the MLLMs trained
by Math-PUMA exhibit accuracy scores improvement about
10%. When compared to the closed-source GPT-4V (OpenAI 2023b), Math-PUMA-Qwen2-7B performs competitively with only a gap of 6.5%, demonstrating the effectiveness of Math-PUMA.
**Comparison on MATHVISTA** MATHVISTA is a comprehensive benchmark designed to evaluate mathematical reasoning. According to the results presented in Table 1, Math-PUMA-Qwen2-7B demonstrates SOTA performance in GPS, ALG, GEO and SCI domains among opensource MLLMs of the same scale. It outperforms InternLMXComposer2-VL (Dong et al. 2024) by significant margins,
with accuracy improvements of 16.4%, 15.7%, 16.8%, and
10.7% in these respective domains.
**Comparison on WE-MATH** WE-MATH places strong
emphasis on the importance of the mathematical reasoning
process. Table 2 demonstrates that Math-PUMA-Qwen27B achieves SOTA performance in average scores among
open-source MLLMs with approximate 10B parameters,
surpassing InternLM-XComposer2-VL (Dong et al. 2024).
Notably, even among open-source MLLMs with parameters exceeding 20B, Math-PUMA-Qwen2-7B outperforms
LLaVA-NeXT (Liu et al. 2024b) 72B model, reaching the
performance of LLaVA-NeXT 110B model. While MathPUMA surpasses Qwen-VL-Max (Bai et al. 2023) among
closed-source models, there remains a significant gap compared to GPT-4V (OpenAI 2023b) and GPT-4o (OpenAI
2024b).
**Ablation Study**
We conduct ablation studies on MATHVERSE to showcase
the effectiveness of each stage and to explore the influence
of sequential orders on Math-PUMA.
**The Role of Each Stage** We design three ablation experiments to demonstrate the effectiveness of each stage by removing stage 1, 2, and 3 separately. Subsequently, we evalu
-----
Table 1: Mathematical evaluation on MATHVERSE and MATHVISTA testmini sets. For MATHVERSE, we calculate the
“ALL” score without averaging the “Text-only” version. For MATHVISTA, we select 4 mathematical categories from the original 12 categories. ALL: overall accuracy across original categories; GPS: geometry problem solving; ALG: algebraic reasoning;
GEO: geometry reasoning; SCI: scientific reasoning. For closed-source and open-source MLLMs, the best accuracy scores are
marked in bold fonts, while the second best accuracy scores are marked in underline fonts, respectively.
|MATHVERSE MATHVISTA Model # Params. Text- Text- Vision- Vision- Vision- ALL ↑ dom. ↑ lite ↑ int. ↑ dom. ↑ only ↑ ALL ↑ GPS ↑ ALG ↑ GEO ↑ SCI ↑|MATHVERSE|Col3|Col4|Col5|Col6|Col7|MATHVISTA|Col9|Col10|Col11|Col12|
|---|---|---|---|---|---|---|---|---|---|---|---|
||ALL ↑|Text- dom. ↑|Text- lite ↑|Vision- int. ↑|Vision- dom. ↑|Vision- only ↑|ALL ↑|GPS ↑|ALG ↑|GEO ↑|SCI ↑|
_Baselines_
|Random chance Human performance|- -|12.4 64.9|12.4 71.2|12.4 70.9|12.4 61.4|12.4 68.3|12.4 66.7|17.9 60.3|21.6 48.4|21.7 50.9|20.1 51.4|17.2 64.9|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
_Close-source LLMs_
|ChatGPT (Ouyang et al. 2022) GPT-4 (OpenAI 2023a)|- -|- -|33.3 46.5|18.9 20.7|- -|- -|- -|33.2 33.2|29.3 31.7|31.0 33.5|31.0 32.2|50.8 58.2|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
_Closed-source MLLMs_
|Qwen-VL-Plus (Bai et al. 2023) Gemini-1.0-Pro (Gemini Team 2023) Qwen-VL-Max (Bai et al. 2023) GPT-4V (OpenAI 2023b)|- - - -|11.8 22.3 24.8 38.3|15.7 27.6 30.3 52.1|11.1 23.7 24.8 40.9|9.0 19.4 20.6 34.9|13.0 20.3 23.3 33.6|10.0 20.5 25.1 29.8|43.3 45.2 - 49.9|38.5 40.4 - 50.5|39.1 45.2 - 53.0|39.3 41.0 - 51.0|59.0 54.9 - 63.1|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
_Open-source MLLMs_
|mPLUG-Owl2 (Ye et al. 2024) LLaMA-Adapter-V2 (Gao et al. 2023b) LLaVA-1.5 (Liu et al. 2024a) LLaVA-NeXT (Liu et al. 2024b) MiniGPT-v2 (Chen et al. 2023a) SPHINX-Plus (Gao et al. 2024) ShareGPT4V (Chen et al. 2023b) InternLM-XC2. (Dong et al. 2024) G-LLaVA (Gao et al. 2023a) SPHINX-MoE (Gao et al. 2024) DeepSeek-VL (Lu et al. 2024a) Math-LLaVA (Shi et al. 2024)|7B 7B 13B 8B 7B 13B 13B 7B 7B 8×7B 7B 13B|4.6 5.7 7.6 10.3 11.0 12.2 13.1 16.3 16.6 16.8 19.3 22.9|6.6 6.2 8.8 12.8 12.1 13.9 16.2 20.2 20.9 26.2 23.0 27.3|6.3 5.9 7.6 12.0 12.0 11.6 16.2 14.3 20.7 17.4 23.2 24.9|6.3 6.1 7.4 10.7 13.1 11.6 15.5 14.2 17.2 16.7 20.2 24.5|5.6 4.2 7.4 9.7 10.3 13.5 13.8 17.5 14.6 12.5 18.4 21.7|4.9 6.1 6.9 6.3 7.4 10.4 3.7 15.2 9.4 11.1 11.8 16.1|22.2 23.9 25.7 34.6 23.1 36.8 27.5 47.8 23.8 42.3 34.9 38.3|23.6 25.5 18.3 - 26.0 - 27.4 31.7 38.9 31.2 28.4 29.3|23.6 26.3 19.6 - 28.1 - - 32.0 36.3 31.7 29.2 28.5|23.9 24.3 17.6 - 24.7 - 27.6 30.5 35.6 30.5 27.2 30.5|26.3 29.5 42.6 - 25.4 - - 37.7 20.5 50.8 35.3 42.6|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|Math-PUMA-Qwen2-1.5B Math-PUMA-Qwen2-7B Math-PUMA-DeepSeek-Math-7B|1.5B 7B 7B|29.6 33.6 31.8|35.8 42.1 43.4|32.2 35.0 35.4|31.3 33.4 33.6|30.4 31.6 31.6|18.5 26.0 14.7|44.5 47.9 44.7|47.6 48.1 39.9|43.4 47.7 39.2|47.3 47.3 41.4|41.0 42.6 48.4|
ate the accuracy scores of overall, text-dominant and visiononly scenarios, as well as the gap between them. The results
of the ablation experiments are presented in Table 3.
**Removing Stage 1: Stage 1 aims to enhance the mathe-**
matical reasoning capabilities of the LLMs. As observed in
Table 3, upon removing stage 1, there is a slight decrease in
the accuracy compared to the corresponding model trained
with all three stages. This reduction occurs because stage 1
serves as the foundation for stage 2. When the LLM lacks
strong mathematical reasoning capabilities, strong logits are
not reliable to supervise weak logits, resulting in lower performance. However, due to the presence of complete stage 2
and 3, the gap remains close to that of the complete threestage training model and relatively low.
**Removing Stage 2: Stage 2 embodies our devised**
PUMA, facilitating a close alignment between visual and
textual modalities. As depicted in Table 3, the absence of
stage 2 results in a wider gap in reasoning performance between textual and visual modalities when compared to the
three-stage approach. Nonetheless, with the enhancement
of mathematical reasoning capabilities by stage 1 and multimodal instruction tuning with high-quality data through
stage 3, the overall performance persists at a relatively high
level.
**Removing Stage 3: Stage 3 is multimodal instruction tun-**
ing. We observe that if only stage 1 and 2 are performed
without subsequent multimodal instruction tuning, MLLMs
tend to lose conversational capabilities to some extent. As
seen in Table 3, the performance of MLLMs drastically declines when stage 3 is excluded, primarily due to the loss of
conversational capabilities. Since we have conducted stage
2, the gap between textual and visual modalities remains relatively small.
**Sequential Order of Stages** We swap stage 2 and 3 to assess their impact on MLLMs. As shown in Table 3, exchanging stage 2 and 3 leads to a significant performance drop.
Our analysis of each stage reveals the critical role of stage 3
in maintaining the conversational abilities of MLLMs. Consequently, rearranging the stage 2 and 3 results in the loss of
conversational skills of MLLMs, thereby influencing their
overall performance. Nonetheless, the eventual implementation of stage 2 ensures that the gap between textual and
visual modalities remains relatively small.
**Have the modality gaps truly narrowed?** Through the
aforementioned analysis, we have demonstrated the efficacy
of our method. However, we still seek to provide a definitive conclusion to address the initial query: Has the perfor
-----
Table 2: Evaluation results on WE-MATH testmini set.
AVG: average score (strict); RM: rote memorization (strict).
The best scores of each category are marked in bold fonts.
|Model|# Params.|AVG ↑|RM ↓|
|---|---|---|---|
_Close-source MLLMs_
|Qwen-VL-Max (Bai et al. 2023) Gemini-1.5-Pro (Reid et al. 2024) GPT-4V (OpenAI 2023b) GPT-4o (OpenAI 2024b)|- - - -|10.5 26.4 31.1 42.9|75.5 54.8 47.9 34.2|
|---|---|---|---|
_Open-source MLLMs (≥20B)_
|InternVL-Chat-V1.5 (Chen et al. 2024) LLaVA-NeXT (Liu et al. 2024b) LLaVA-NeXT (Liu et al. 2024b)|26B 72B 110B|15.0 13.4 19.2|73.3 71.0 66.0|
|---|---|---|---|
_Open-source MLLMs (≈10B)_
1.4
1.2
1.0
0.8
0.6
0.4
0.2
0.0
40
35
30
25
20
15
10
|Text Vision Absolute Skewness Coefficient of Variation|Col2|Col3|
|---|---|---|
||||
ImageBind
LLM
LLaVA
Next(7B)
Share
GPT4V
G-LLaVA Gemini
1.0-Pro
Qwen-VL
Max
LLaVa
Next(34B)
Math-PUMA
(Qwen2-7B)
GPT-4V
Figure 4: Visualizing MLLMs’ Performance on MATHVERSE. “Text” represents the average scores for textdominant and text-lite categories, while “Vision” represents
the average scores for vision-intensive, vision-dominant,
and vision-only categories. “Absolute Skewness” and “Coefficient of Variance” denote the statistical measures of score
distribution across the five categories, with skewness taken
as an absolute value.
intuitively assess the model’s performance across these two
distinct modalities. Additionally, we compute the skewness
and coefficient of variation of the model’s scores on different types of questions in MATHVERSE to corroborate our
observations regarding the model’s modal balance.
As illustrated in Figure 4, we compare our model, trained
using our proposed method, with several popular MLLMs.
In terms of overall performance, our model attains high average scores on both text and image-based questions, outperforming closed-source MLLMs such as Gemini-1.0-Pro and
Qwen-VL-Max. We analyze the performance gap between
text and visual modalities. Our model maintains a high level
of performance while exhibiting a relatively smaller gap,
which is even less than that of GPT-4V. Additionally, regarding score distribution, a model that performs consistently
across modalities should demonstrate similar scores across
various types of questions in MATHVERSE. Such consistency is indicated by lower absolute skewness and coefficient of variation. By visualizing the score distributions of
multiple models, it is evident that our model exhibits low
levels of both skewness and coefficient of variation, indicating a well-balanced performance across different types.
In summary, our alignment method effectively mitigates the
performance disparity between different modalities.
|LLaVA-1.5 (Liu et al. 2024a) LLaVA-1.5 (Liu et al. 2024a) LLaVA-1.6 (Liu et al. 2024b) LLaVA-1.6 (Liu et al. 2024b) DeepSeek-VL (Lu et al. 2024a) G-LLaVA (Gao et al. 2023a) Math-LLaVA (Shi et al. 2024) InternLM-XC2. (Dong et al. 2024)|7B 13B 7B 13B 7B 13B 13B 7B|6.5 8.4 3.3 5.2 6.3 6.5 11.1 12.7|85.6 78.1 89.1 86.9 84.8 86.6 72.8 77.6|
|---|---|---|---|
|Math-PUMA-Qwen2-1.5B Math-PUMA-Qwen2-7B Math-PUMA-DeepSeek-Math|1.5B 7B 7B|10.4 19.2 15.6|75.5 67.8 67.4|
Table 3: Results of ablation study. Order: the sequential order of Stage 1, 2, and 3; ALL: overall accuracy; Text-dom.:
accuracy of text-dominant data; Vision-only: accuracy of
vision-only data; Gap: (Text-dom. − Vision-only) / Visiononly. The best scores of each LLM are marked in bold fonts.
|Order|LLM|ALL ↑|Text- dom. ↑|Vision- only ↑|Gap ↓|
|---|---|---|---|---|---|
_Standard pipeline_
Visiononly ↑
|1 →2 →3|Qwen2-1.5B Qwen2-7B DeepSeek-Math|29.6 33.6 31.8|35.8 42.1 43.4|18.5 26.0 14.7|93.5 61.9 195.2|
|---|---|---|---|---|---|
_Effectiveness of Stage 1 (Enhancing LLM)_
|2 →3|Qwen2-1.5B Qwen2-7B DeepSeek-Math|17.0 19.6 23.9|19.9 27.3 30.7|12.1 11.9 11.2|64.5 129.4 174.1|
|---|---|---|---|---|---|
_Effectiveness of Stage 2 (Math-PUMA)_
|1 →3|Qwen2-1.5B Qwen2-7B DeepSeek-Math|24.6 27.2 29.3|40.3 44.1 43.4|9.8 11.0 9.1|311.2 300.9 376.9|
|---|---|---|---|---|---|
_Effectiveness of Stage 3 (Multimodal instruction tuning)_
**Conclusion**
In this paper, we present Math-PUMA, a progressive upward multimodal alignment approach aimed at enhancing
the mathematical reasoning capabilities of MLLMs. Experimental results indicate that Math-PUMA MLLMs not only
achieve state-of-the-art performance among open-source
models on multiple mathematical benchmarks but also significantly reduce the performance gap between textual and
visual modalities. Despite the impressive results of MathPUMA, a undeniable disparity remains when compared
to human-level proficiency. Continued exploration in high
|1 →2|Qwen2-1.5B Qwen2-7B DeepSeek-Math|11.7 21.2 22.2|15.5 28.9 36.2|8.1 12.2 14.8|91.4 136.9 144.6|
|---|---|---|---|---|---|
_Sequential Order of Stages_
|1 →3 →2|Qwen2-1.5B Qwen2-7B DeepSeek-Math|24.5 26.7 23.4|38.2 34.4 34.3|12.1 18.7 4.3|215.7 84.0 697.7|
|---|---|---|---|---|---|
mance gap between different modalities truly narrowed? To
this end, we base our exploration on the evaluation metrics
provided by MATHVERSE, calculating the average scores of
the model on text-based questions and visual questions to
-----
quality data augmentation, automated data generation methods, and effective training strategies is necessary to further
advance the mathematical reasoning abilities of MLLMs.
We hope our work provides valuable insights and inspiration for future research in this domain.
**References**
Bai, J.; Bai, S.; Yang, S.; Wang, S.; Tan, S.; Wang, P.;
Lin, J.; Zhou, C.; and Zhou, J. 2023. Qwen-VL: A Frontier Large Vision-Language Model with Versatile Abilities.
_arXiv preprint arXiv:2308.12966._
Cai, S.; Bao, K.; Guo, H.; Zhang, J.; Song, J.; and Zheng, B.
2024. GeoGPT4V: Towards Geometric Multi-modal Large
Language Models with Geometric Image Generation. arXiv
_preprint arXiv:2406.11503._
Chen, J.; Li, D. Z. X. S. X.; Zhang, Z. L. P.; Xiong, R. K.
V. C. Y.; and Elhoseiny, M. 2023a. MiniGPT-V2: Large
Language Model as a Unified Interface for Vision-Language
Multi-task Learning. arXiv preprint arXiv:2310.09478.
Chen, L.; Li, J.; wen Dong, X.; Zhang, P.; He, C.; Wang,
J.; Zhao, F.; and Lin, D. 2023b. ShareGPT4V: Improving
Large Multi-Modal Models with Better Captions. _ArXiv,_
abs/2311.12793.
Chen, W.; Ma, X.; Wang, X.; and Cohen, W. W. 2022.
Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv
_preprint arXiv:2211.12588._
Chen, Z.; Wang, W.; Tian, H.; Ye, S.; Gao, Z.; Cui, E.;
Tong, W.; Hu, K.; Luo, J.; Ma, Z.; et al. 2024. How Far
Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites. arXiv preprint
_arXiv:2404.16821._
Dong, X.; Zhang, P.; Zang, Y.; Cao, Y.; Wang, B.; Ouyang,
L.; Wei, X.; Zhang, S.; Duan, H.; Cao, M.; et al. 2024.
InternLM-XComposer2: Mastering free-form text-image
composition and comprehension in vision-language large
model. arXiv preprint arXiv:2401.16420.
Gao, J.; Pi, R.; Zhang, J.; Ye, J.; Zhong, W.; Wang, Y.; Hong,
L.; Han, J.; Xu, H.; Li, Z.; et al. 2023a. G-LLaVA: Solving Geometric Problem with Multi-Modal Large Language
Model. arXiv preprint arXiv:2312.11370.
Gao, P.; Han, J.; Zhang, R.; Lin, Z.; Geng, S.; Zhou, A.;
Zhang, W.; Lu, P.; He, C.; Yue, X.; Li, H.; and Qiao, Y.
2023b. LLaMA-Adapter V2: Parameter-Efficient Visual Instruction Model. arXiv preprint arXiv:2304.15010.
Gao, P.; Zhang, R.; Liu, C.; Qiu, L.; Huang, S.; Lin, W.;
Zhao, S.; Geng, S.; Lin, Z.; Jin, P.; et al. 2024. SPHINX-X:
Scaling Data and Parameters for a Family of Multi-modal
Large Language Models. arXiv preprint arXiv:2402.05935.
Gemini Team, G. 2023. Gemini: a family of highly capable
multimodal models. arXiv preprint arXiv:2312.11805.
Gou, Z.; Shao, Z.; Gong, Y.; Yang, Y.; Huang, M.; Duan,
N.; Chen, W.; et al. 2023. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint
_arXiv:2309.17452._
Kingma, D. P.; and Ba, J. 2014. Adam: A method for
stochastic optimization. arXiv preprint arXiv:1412.6980.
LI, J.; Beeching, E.; Tunstall, L.; Lipkin, B.; Soletskyi,
R.; Huang, S. C.; Rasul, K.; Yu, L.; Jiang, A.; Shen, Z.;
Qin, Z.; Dong, B.; Zhou, L.; Fleureau, Y.; Lample, G.; and
Polu, S. 2024. NuminaMath. https://huggingface.co/AIMO/NuminaMath-CoT.
Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022. Blip: Bootstrapping language-image pre-training for unified visionlanguage understanding and generation. In International
_conference on machine learning, 12888–12900. PMLR._
Liu, H.; Li, C.; Li, Y.; and Lee, Y. J. 2024a. Improved
baselines with visual instruction tuning. In Proceedings of
_the IEEE/CVF Conference on Computer Vision and Pattern_
_Recognition, 26296–26306._
Liu, H.; Li, C.; Li, Y.; Li, B.; Zhang, Y.; Shen, S.; and Lee,
Y. J. 2024b. LLaVA-NeXT: Improved Reasoning, OCR, and
World Knowledge.
Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2023. Visual Instruction Tuning. In NeurIPS.
Liu, H.; and Yao, A. C.-C. 2024. Augmenting math word
problems via iterative question composing. arXiv preprint
_arXiv:2401.09003._
Lu, H.; Liu, W.; Zhang, B.; Wang, B.; Dong, K.; Liu, B.;
Sun, J.; Ren, T.; Li, Z.; Sun, Y.; et al. 2024a. Deepseek-VL:
Towards Real-world Vision-Language Understanding. arXiv
_preprint arXiv:2403.05525._
Lu, P.; Bansal, H.; Xia, T.; Liu, J.; Li, C.; Hajishirzi, H.;
Cheng, H.; Chang, K.-W.; Galley, M.; and Gao, J. 2024b.
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts. In The Twelfth International
_Conference on Learning Representations._
Mitra, A.; Khanpour, H.; Rosset, C.; and Awadallah, A.
2024. Orca-math: Unlocking the potential of slms in grade
school math. arXiv preprint arXiv:2402.14830.
OpenAI. 2023a. GPT-4 Technical Report. _ArXiv,_
abs/2303.08774.
OpenAI. 2023b. GPT-4V(ision) System Card.
OpenAI. 2024a. GPT-4o mini: advancing cost-efficient intelligence.
OpenAI. 2024b. GPT-4o System Card.
Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.;
Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Gray, A.;
Schulman, J.; Hilton, J.; Kelton, F.; Miller, L.; Simens, M.;
Askell, A.; Welinder, P.; Christiano, P.; Leike, J.; and Lowe,
R. 2022. Training language models to follow instructions
with human feedback. In Advances in Neural Information
_Processing Systems._
Qiao, R.; Tan, Q.; Dong, G.; Wu, M.; Sun, C.; Song, X.;
GongQue, Z.; Lei, S.; Wei, Z.; Zhang, M.; et al. 2024.
We-Math: Does Your Large Multimodal Model Achieve
Human-like Mathematical Reasoning? _arXiv preprint_
_arXiv:2407.01284._
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.;
Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.;
-----
et al. 2021. Learning Transferable Visual Models from Natural Language Supervision. In International conference on
_machine learning, 8748–8763. PMLR._
Reid, M.; Savinov, N.; Teplyashin, D.; Lepikhin, D.; Lillicrap, T.; Alayrac, J.-b.; Soricut, R.; Lazaridou, A.; Firat, O.;
Schrittwieser, J.; et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
_arXiv preprint arXiv:2403.05530._
Saxton, D.; Grefenstette, E.; Hill, F.; and Kohli, P. 2019.
Analysing mathematical reasoning abilities of neural models. arXiv preprint arXiv:1904.01557.
Shao, Z.; Wang, P.; Zhu, Q.; Xu, R.; Song, J.; Zhang, M.;
Li, Y.; Wu, Y.; and Guo, D. 2024. DeepSeekMath: Pushing
the Limits of Mathematical Reasoning in Open Language
Models. arXiv preprint arXiv:2402.03300.
Shi, W.; Hu, Z.; Bin, Y.; Liu, J.; Yang, Y.; Ng, S.-K.; Bing,
L.; and Lee, R. K.-W. 2024. Math-LLaVA: Bootstrapping
Mathematical Reasoning for Multimodal Large Language
Models. arXiv preprint arXiv:2406.17294.
TIGER-Lab. 2024. VisualWebInstruct.
Tong, Y.; Zhang, X.; Wang, R.; Wu, R.; and He, J. 2024.
DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving. arXiv preprint arXiv:2407.13690.
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.;
Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-ofthought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:
24824–24837.
Wu, T.; Tao, C.; Wang, J.; Zhao, Z.; and Wong, N. 2024.
Rethinking Kullback-Leibler Divergence in Knowledge Distillation for Large Language Models. _arXiv preprint_
_arXiv:2404.02657._
Yang, A.; Yang, B.; Hui, B.; Zheng, B.; Yu, B.; Zhou, C.; Li,
C.; Li, C.; Liu, D.; Huang, F.; et al. 2024. Qwen2 technical
report. arXiv preprint arXiv:2407.10671.
Ye, Q.; Xu, H.; Ye, J.; Yan, M.; Hu, A.; Liu, H.; Qian, Q.;
Zhang, J.; and Huang, F. 2024. mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality
Collaboration. In Proceedings of the IEEE/CVF Conference
_on Computer Vision and Pattern Recognition, 13040–13051._
Yu, L.; Jiang, W.; Shi, H.; Yu, J.; Liu, Z.; Zhang, Y.; Kwok,
J. T.; Li, Z.; Weller, A.; and Liu, W. 2023. Metamath: Bootstrap your own mathematical questions for large language
models. arXiv preprint arXiv:2309.12284.
Yue, X.; Qu, X.; Zhang, G.; Fu, Y.; Huang, W.; Sun, H.; Su,
Y.; and Chen, W. 2023. Mammoth: Building math generalist
models through hybrid instruction tuning. _arXiv preprint_
_arXiv:2309.05653._
Zhai, X.; Mustafa, B.; Kolesnikov, A.; and Beyer, L. 2023.
Sigmoid loss for language image pre-training. In Proceed_ings of the IEEE/CVF International Conference on Com-_
_puter Vision, 11975–11986._
Zhang, R.; Jiang, D.; Zhang, Y.; Lin, H.; Guo, Z.; Qiu, P.;
Zhou, A.; Lu, P.; Chang, K.-W.; Gao, P.; et al. 2024. MathVerse: Does Your Multi-modal LLM Truly See the Diagrams
in Visual Math Problems? arXiv preprint arXiv:2403.14624.
-----
| [
"Wenwen, Zhuang",
"Xin, Huang",
"Xiantao, Zhang",
"Jin, Zeng"
] | 2024-08-16T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2408.08640 | https://arxiv.org/abs/2408.08640 | https://www.semanticscholar.org/paper/380694db7a04f9a199b1f0a25ece648e814bebe6 |
MathCAMPS: Fine-grained Synthesis of Mathematical Problems From Human Curricula | Mathematical problem solving is an important skill for Large Language Models (LLMs), both as an important capability and a proxy for a range of reasoning abilities. Existing benchmarks probe a diverse set of skills, but they yield aggregate accuracy metrics, obscuring specific abilities or weaknesses. Furthermore, they are difficult to extend with new problems, risking data contamination over time. To address these challenges, we propose MathCAMPS: a method to synthesize high-quality mathematical problems at scale, grounded on 44 fine-grained "standards" from the Mathematics Common Core (CC) Standard for K-8 grades. We encode each standard in a formal grammar, allowing us to sample diverse symbolic problems and their answers. We then use LLMs to realize the symbolic problems into word problems. We propose a cycle-consistency method for validating problem faithfulness. Finally, we derive follow-up questions from symbolic structures and convert them into follow-up word problems - a novel task of mathematical dialogue that probes for robustness in understanding. Experiments on 23 LLMs show surprising failures even in the strongest models (in particular when asked simple follow-up questions). Moreover, we evaluate training checkpoints of Pythia 12B on MathCAMPS, allowing us to analyze when particular mathematical skills develop during its training. Our framework enables the community to reproduce and extend our pipeline for a fraction of the typical cost of building new high-quality datasets. | This work proposes MathCAMPS: a method to synthesize high-quality mathematical problems at scale, grounded on 44 fine-grained standards from the Mathematics Common Core (CC) Standard for K-8 grades, and proposes a cycle-consistency method for validating problem faithfulness. | ## MathCAMPS: Fine-grained Synthesis of Mathematical Problems From Human Curricula
**Shubhra Mishra[∗][,][1]** **Gabriel Poesia[∗][,][1]** **Belinda Mo[1]** **Noah D. Goodman[1][,][2]**
```
{shubhra,poesia,ngoodman}@stanford.edu [email protected]
```
Departments of Computer Science[1] and Psychology[2], Stanford University
**Abstract**
Mathematical problem solving is an important skill for Large Language Models
(LLMs), both as an important capability and a proxy for a range of reasoning
abilities. Existing benchmarks probe a diverse set of skills, but they yield aggregate
accuracy metrics, obscuring specific abilities or weaknesses. Furthermore, they are
difficult to extend with new problems, risking data contamination over time. To
address these challenges, we propose MathCAMPS: a method to synthesize highquality mathematical problems at scale, grounded on 44 fine-grained “standards”
from the Mathematics Common Core (CC) Standard for K-8 grades. We encode
each standard in a formal grammar, allowing us to sample diverse symbolic problems and their answers. We then use LLMs to realize the symbolic problems into
word problems. We propose a cycle-consistency method for validating problem
faithfulness. Finally, we derive follow-up questions from symbolic structures and
convert them into follow-up word problems—a novel task of mathematical dialogue that probes for robustness in understanding. Experiments on 23 LLMs show
surprising failures even in the strongest models (in particular when asked simple
follow-up questions). Moreover, we evaluate training checkpoints of Pythia 12B on
MathCAMPS, allowing us to analyze when particular mathematical skills develop
during its training. Our framework enables the community to reproduce and extend
our pipeline for a fraction of the typical cost of building new high-quality datasets.
**Introduction**
As Large Language Models (LLMs) become increasingly capable, mathematical reasoning problems
have emerged as a key benchmark for evaluating their abilities. Mathematical reasoning is a critical
subproblem of many important tasks, such as scientific question answering and quantitative data
analysis, making it a prerequisite for a range of downstream applications. Moreover, mathematical
reasoning tests a broad spectrum of reasoning skills, serving as a valuable proxy for assessing
reasoning capabilities more generally. Consequently, several benchmarks, notably GSM8K [9] and
MATH [14], became popular measures of the progress of LLMs, with each new generation of models
demonstrating rapid advancements.
However, the classical approach to benchmarking in Machine Learning, which involves evaluating
models on a fixed set of human-created problems, faces new fundamental challenges in the era of
LLMs. First, these models are trained on massive public datasets that may unintentionally include
the very benchmarks used for evaluation, raising concerns about data contamination [4, 8, 26]. This
problem is exacerbated by the lack of access to the training data of most state-of-the-art LLMs, such
as GPT-4 [1], Claude [2], and even open-weight models, such as LLaMA [25]. Evaluating LLMs
on novel problems could mitigate the data contamination concerns. But creating new mathematical
problems is challenging. Crafting new high-quality problems requires expertise and is expensive;
_∗_ Equal contribution.
Preprint. Under review.
-----
Figure 1: Overview of the MathCAMPS generation pipeline. We start from a grammar (A) that
represents problems tied to a Common Core Standard - a specific mathematical ability drawn from a
human curriculum. We sample problems in a symbolic form (B), and use a language model to realize
it in natural language (C), applying a cycle-consistency where we back-translate the problem into
symbolic form and ensure the answer remains the same, validating truthfulness. We also synthesize
incremental and counterfactual follow-up problems
sourcing problems from public sources does not address the question of whether LLMs might have
been trained on those problems.
Moreover, while existing benchmarks serve to track overall progress in the field, they do not inform
us about what mathematical abilities current language models do and do not have. A single aggregate
accuracy — in a topic as diverse as mathematics — does not provide insights into specific capabilities
or challenges for current language models, and how those have been changing over time. For instance,
GPT-4 [1] improved by 35% on GSM8K when compared to GPT-3 [7]; yet, it is still challenging to
understand which improved capabilities might have accounted for this improvement (e.g., arithmetic
with larger numbers, proficiency with fractions or decimals, or understanding of longer problems).
Such an analysis would help shed light on open questions about language model learning, and how it
relates to (or diverges from) human learning.
To address these challenges, we propose the Mathematics Common Core Assessment of Problem
Solving — MathCAMPS — a framework for synthesizing high-quality mathematical word problems at
scale. Our approach is grounded in the Mathematics Common Core (CC) Standards from Kindergarten
through 8th grade. The CC standardizes a mathematics curriculum adopted by thousands of schools,
describing specific abilities that students should learn by each grade. By constructing MathCAMPS in
direct relation to the CC, our benchmark enables a series of rich analyses of mathematical proficiency
in language models, allowing direct parallels to abilities that human students are also evaluated on.
We encode the skills described in the CC (namely the standards) in a grammar that allows us to
sample an arbitrary number of diverse problems targeting that skills (e.g., word problems involving
addition of decimals, or solving systems of equations with fractions), represented symbolically.
Our pipeline uses a symbolic solver (SymPy) to obtain answers to the symbolic problems, and
employs an LLM to realize those into word problems. We introduce a cycle-consistency method to
validate whether a word problem faithfully represents the original symbolic problem. Prompting the
LLM to back-translate the word problem into a symbolic structure and comparing the new answer to
the original enables us to eliminate most unfaithful generation errors and maintain high quality.
Furthermore, building on our symbolic representation of problem structures, we introduce a novel
task of “mathematical dialogue”. In this task, once the LLM answers a problem correctly, we
ask a follow-up question to further probe understanding. We introduce two types of follow-up
problems: counterfactual, where we modify an aspect of the original problem and request an updated
-----
answer, and incremental, where we provide additional information and ask for a new answer. These
questions require simultaneously understanding the original problem and the LLM’s own solution —
an additional challenge that several models struggle with.
Using our framework, we synthesize problems for each of 44 CC standards, resulting in a dataset
of 4,900 initial problems. We also generate follow-up questions (incremental and conterfactual)
in standards where those apply, yielding 9607 total problems. We evaluate a suite of 23 language
models, both proprietary and open. Our analysis uncovers surprising failures, particularly in response
to simple follow-up questions, revealing notable gaps even in strong models. Moreover, to the best of
our knowledge, we perform the first analysis of the learning dynamics of mathematical skills during
LLM training, leveraging checkpoints of Pythia 12B [6]. Our contributions are:
- We present MathCAMPS, a framework for synthesizing high-quality mathematical word
problems at scale, stratified into fine-grained capabilities defined by the Mathematics Common Core Standards for K-8 grades. We release 9607 problems and our extensible pipeline
to generate arbitrarily many more.
- We introduce a cycle-consistency method to validate the faithfulness of the generated word
problems to their underlying symbolic structures.
- We propose a novel task of “mathematical dialogue,” featuring counterfactual and incremental follow-up questions that probe the models’ understanding more deeply.
- We evaluate a diverse set of 23 language models on our dataset, revealing surprising failures
and gaps in performance, even in strong models.
**2** **Related Work**
Our work closely relates to (i) current benchmarks of mathematical reasoning in LLMs, (ii) benchmarks constructed using LLMs, and (iii) behavioral testing and applications in NLP.
**Benchmarks of mathematical reasoning** MATH [14] and GSM8K [9] have been two leading
benchmarks for the evaluation of mathematical reasoning in LLMs. Both datasets consist entirely
of human-authored problems — a process that is expensive to reproduce, and as a result, neither
benchmarks were updated since their initial releases. Given that LLMs are trained on Web data, it is
unclear whether they might have been trained on the test problems of these benchmarks [8] – either
directly or from other sources (e.g., all problems in MATH come from past public competitions). In
fact, GSM1K [26], a new dataset that independently attempted to reproduce the data distribution of
GSM8K, has found reduced performance on several models, suggesting test set contamination.
**LLM-generated synthetic datasets for LLMs** As collecting data from human annotaotors at scale
is expensive (especially in domains requiring expertise, such as mathematics), prior work has relied
on LLMs to aid the generation of large-scale benchmarks [12]. BigToM [10], a benchmark of social
reasoning in LLMs, applied the idea of symbolically scaffolding questions for the LLM to realize in
natural language, an approach that we transport to mathematics. Dyval [27] proposed a method for
generating reasoning problems for LLMs based on a DAG representing the computation. While Dyval
contains two mathematical tasks (arithmetic and solving linear equations), MathCAMPS takes this
idea further for mathematical reasoning, spanning 44 skills directly grounded on a human curriculum.
**Behavioral testing in NLP** Our goal to provide a fine-grained evaluation of mathematical reasoning
has parallels with behavioral testing — the idea of testing software systems on specific features, as
opposed to just their overall adequacy [20]. In particular, CheckList [20] allowed testing machine
translation models for fine-grained failure modes. Dynaboard [18] proposed an NLP leaderboard
where users can adapt to their own needs by choosing the utility of different metrics; our dataset
enables a similar user-customizable comparison between models for mathematical reasoning.
**3** **MathCAMPS**
We now describe our pipeline for automatically generating mathematical problems and follow-up
[questions that are grounded in a human curriculum – the Mathematics Common Core (https:](https://www.thecorestandards.org)
-----
```
//www.thecorestandards.org). Figure 1 overviews our pipeline. We describe the Common Core,
```
how we represent its standards in a grammar, sample symbolic problems, generate follow-ups, realize
those in natural language, and finally improve quality by checking for cycle consistency.
**3.1** **The Mathematics Common Core**
To ground problems in a human curriculum, we turn to the Common Core State Standards for
Mathematics. 41 states in the United States adopt the CC as their curriculum. The CC details the
mathematical content that students should master from Kindergarten up to 12th grade. Within each
grade, the CC elaborates a series of individual standards, which detail a particular mathematical skill
that students should learn at that grade. Each standard has an identifier, such as K.CC.C.7, and a
summary description — for K.CC.C.7, this is “Compare two numbers between 1 and 10 presented as
written numerals”. Here, K indicates that this is a standard for the Kindergarten grade level, whereas
```
8.EE.C.8 — “Analyze and solve pairs of simultaneous linear equations” — is an 8th grade standard.
```
We take 44 standards spanning grades K through 8 to compose MathCAMPS, focusing on standards
that are amenable to automatic problem generation with a final answer in text form. The complete
CC curriculum has 229 standards across grades K through 8, bring our coverage to 19.2% of the
curriculum for these grades. Notably, we currently do not cover standards focusing on conceptual
understanding (e.g., 3.OA.D.9 – “Identify arithmetic patterns [...], and explain them using properties
of operations.”), or standards that emphasize visual reasoning (e.g., 6.G.A.4 – “Represent threedimensional figures using nets made up of rectangles and triangles, and use the nets to find the surface
area of these figures.”). All 44 standards covered in MathCAMPS are listed in Appendix A.
**Representing Common Core standards** We represent CC standards as non-terminals in an at_tribute grammar [13] — a rich formalism that can encode semantic, context-sensitive rules. Attribute_
grammars can encode syntax much like a context-free grammar, but also allow us to embed information processing (e.g., setting and testing conditions on attributes, such as bounds on constants) in the
production rules. We map each standard s to a non-terminal Ps, such that all strings produced by
expanding Ps using production rules are valid symbolic representations of a problem pertaining to
standard i. Figure 1 shows a (simplified) grammar for the standard 1.OA.A.1 – “Use addition and
subtraction within 20 to solve word problems involving situations of adding to, taking from, putting
together”. Here, a word problem, generated by the Problem non-terminal, consists of a sequence of
declarative statements expressing equations between expressions. For this standard, an expression
consists of addition, subtraction, variables, and constants. After these declarations, the problem ends
with a question — an expression representing the value that the problem asks for. Concretely, our
grammar is implemented in Python: each non-terminal becomes a stochastic function that samples
and applies a production rule, recursively expanding non-terminals that it produces. In the grammar
in Figure 1 (A), sampling a Problem generates a structure such as the one shown in Figure 1 (B).
**Enforcing problem constraints** When sampling problems, there is no a priori guarantee that all
generated statements are necessary to answer the question. To avoid such statements, we remove
them by applying a simple graph reachability algorithm on a dependency graph between statements,
removing statements that the answer does not depend on. This enforces the constraint of only having
useful statements in problems. Besides this constraint, which we always enforce, each standard can
apply specific constraints. The standard 1.OA.A.1 has an example of such constraint: it requires that
students only be asked to use “addition and subtraction within 20.” To be faithful to this standard,
we must validate that no intermediate values used in the solution exceed 20. To encode this and
other constraints across the curriculum, we implement a suite of 6 parameterized filters (detailed
in Appendix C) that are selectively applied depending on the standard’s specification. Applying
rejection sampling from the grammar using the standard’s filters gives a procedure for generating
valid symbolic problems. For all standards that can be formulated as solving a system of linear
equations, we use SymPy [19] to obtain final answers. For other cases, we use two simple custom
procedures (to list the factors of numbers and to compare values).
**3.2** **From symbolic to word problems**
To realize the symbolic problems into natural language, we use few-shot prompting with GPT-4
(Figure 1 (C)). For each standard, we sampled two valid symbolic problems and manually wrote a
-----
problem in natural language that faithfully represents the symbolic structure. For standards involving
word problems, which typically contain a simple cover story, we also sampled a random theme out of
188 that we crafted (e.g., “Book”, “Pirate ship”, “Money”). These examples are then given to GPT-4
in-context, along with a new symbolic structure (and a random theme, for standards where that is
relevant), requesting it to generate a faithful natural language problem for that structure.
Unlike generating problem stories from a fixed set of templates, using a language model for generating
natural language problems gives us fluid, diverse language. Unfortunately, we also lose any guarantee
that the generated word problem represents the original symbolic structure faithfully. To mitigate
this issue, we also introduce a cycle consistency method that we have found to drastically improve
problem quality. Precisely, we use the same few-shot examples we crafted for each standard in
_reverse (i.e., with the natural language problem coming first, followed by the symbolic structure) to_
have GPT-4 translate the word problem it wrote into a symbolic structure. In this step, the model
is not given the original structure. We then parse and apply the appropriate solver to the generated
symbolic problem; we consider the generation cycle-consistent if the answers to the original and
recovered problems are the same (illustrated in Figure 1). We then discard problems that fail this test.
This cycle consistency test significantly improves the reliability of our pipeline. We manually
evaluated 245 random problems generated by sampling a symbolic structure and then a word problem
from GPT-4. Out of those, we identified 30 word problems (12.2%) that were not faithful to the
original symbolic structure — for those, the answer that we compute to the symbolic problem does
not match our manual solution to the word problem. Cycle consistency discarded 25 of those (and 7
problems that were indeed faithful). Out of the remaining 215 problems, 210 (97.7%) were judged as
faithful in our manual check.
**3.3** **Generating follow-up questions**
As human instructors know, follow-up questions are often the best way to probe a student’s understanding. In MathCAMPS, we leverage our symbolic representation of problems to derive follow-up
questions. We propose two kinds of questions: counterfactual questions, where we change a constant
in the original problem, and incremental questions, where we add a new piece of information. For
each CC standard, we mark which (if any) of these two categories of follow-ups are applicable. Symbolically, follow-up questions are represented as a difference to be applied to the original question —
when we apply the difference, we obtain a new problem. We then use the same solver as the original
problem to obtain the ground-truth answer to the follow-up question. We employ the same few-shot
structure to translate this difference into a natural language question, and parse it back into a symbolic
structure to test for cycle consistency.
**4** **Experiments**
We now evaluate a suite of 23 LLMs from 8 different vendors on MathCAMPS. We evaluate all models
by sampling with temperature 0, using a fixed 1-shot prompt with the first example from GSM8K,
mostly to demonstrate the format. For all models (most of them instruction-tuned), a single example
was enough for to adhere to the task and the format we specify. The rich structure in MathCAMPS
allows us to perform a number of unique analyses on LLMs relating to specific mathematical abilities
and their corresponding grade levels for human students. Precisely, we investigate: (1)How do LLMs
perform overall on MathCAMPS? How does their performance correlate with GSM8k? (2) Do
individual models have relative strengths and weaknesses, or does performance improve uniformly
across skills? (3) How well do LLMs respond to follow-up questions? How is their accuracy affected
when also considering follow-ups?
**4.1** **Overall performance**
Table 1 shows both aggregate accuracy on MathCAMPS, as well as accuracy across standards
partitioned by grade, whereas Figure 3 compares the aggregate accuracies on MathCAMPS and
GSM8K. Closed-weights models are shown above the line, with open-weights models below. GPT-4o
ranks at the top in overall accuracy. Since we used GPT-4 to generate the problems, we must rule out
familiarity bias [22] in this result. We thus generated a 10%-scale dataset with the same pipeline but
using Claude-3 Opus. We found that GPT-4o still outperforms Claude-3 Opus on this dataset (see
-----
Table 1: Final answer accuracy of LLMs on MathCAMPS, both over all problems (All) and considering only standards in each grade we cover (K to 8). Highlights compare to gradewise avg.
**Vendor** **Model** **All** **K** **1** **2** **3** **4** **5** **6** **7** **8**
OpenAI GPT-4o [1] 0.92 0.98 0.98 0.98 0.98 0.92 0.88 0.95 0.89 0.64
Anthropic Claude-3 Opus [2] 0.89 0.97 0.99 0.96 0.98 0.89 0.83 0.96 0.73 0.56
Google Gemini-1.5 Pro [23] 0.89 0.95 0.98 0.97 0.97 0.89 0.83 0.93 0.78 0.54
Google Gemini-1.5 Flash [23] 0.87 0.98 0.98 0.97 0.98 0.80 0.80 0.90 0.84 0.56
OpenAI GPT-3.5 Turbo [1] 0.87 0.96 0.98 0.98 0.97 0.86 0.77 0.90 0.77 0.56
Anthropic Claude-3 Sonnet [2] 0.86 0.96 0.98 0.97 0.98 0.88 0.74 0.94 0.66 0.49
Anthropic Claude-3 Haiku [2] 0.84 0.97 0.98 0.97 0.98 0.87 0.69 0.92 0.59 0.51
Meta Llama 3 70B [25] 0.85 0.96 0.97 0.97 0.97 0.85 0.71 0.87 0.73 0.50
Mistral Mixtral 8x22B [16] 0.84 0.96 0.99 0.98 0.96 0.79 0.69 0.88 0.73 0.61
DeepSeek DeepSeek 67B [5] 0.80 0.95 0.99 0.96 0.93 0.82 0.60 0.84 0.61 0.47
Meta Llama 3 8B [25] 0.77 0.94 0.97 0.96 0.94 0.78 0.55 0.79 0.53 0.43
Mistral Mixtral 8x7B [16] 0.76 0.94 0.96 0.93 0.91 0.75 0.52 0.80 0.53 0.45
EleutherAI Llemma 34B [3] 0.71 0.95 0.96 0.93 0.87 0.61 0.47 0.77 0.46 0.44
Mistral Mistral 7B [15] 0.68 0.89 0.94 0.91 0.84 0.61 0.42 0.66 0.45 0.42
DeepSeek DeepSeek Coder 33B [11] 0.65 0.88 0.93 0.92 0.83 0.54 0.36 0.66 0.44 0.38
Meta CodeLlama 34B [21] 0.64 0.90 0.94 0.92 0.85 0.51 0.38 0.70 0.37 0.30
Microsoft phi-2 [17] 0.63 0.95 0.96 0.89 0.78 0.46 0.38 0.61 0.37 0.41
EleutherAI Llemma 7B [3] 0.62 0.88 0.90 0.85 0.79 0.48 0.41 0.67 0.41 0.36
Google Gemma 7B [24] 0.62 0.83 0.92 0.90 0.82 0.47 0.36 0.65 0.36 0.30
Meta CodeLlama 13B [21] 0.58 0.87 0.92 0.87 0.75 0.41 0.30 0.61 0.32 0.34
Meta CodeLlama 7B [21] 0.52 0.85 0.92 0.84 0.69 0.37 0.25 0.57 0.25 0.16
Google Gemma 2B [24] 0.51 0.66 0.76 0.74 0.67 0.42 0.28 0.55 0.30 0.27
- Avg. Performance 0.74 0.87 0.91 0.89 0.87 0.70 0.59 0.78 0.57 0.38
Appendix B), suggesting that its advantage on MathCAMPS was not due to a familiarity bias. We
make the following observations:
**Models of similar overall performance can have large disparities in specific abilities or grades.**
Several models that have comparable overall accuracies show large differences when compared
on specific mathematical skills. As an example, Claude-3 Opus and Claude-3 Sonnet have similar
overall accuracy both in MathCAMPS (.89 vs .86) and in GSM8K (.95 vs .923). However, we
find that Claude-3 Opus is significantly better at manipulating fractions. For instance, in the CC
standard 5.NF.A.2, described as “Solve word problems involving addition and subtraction of
_fractions referring to the same whole, including cases of unlike denominators”, Opus has a 36%_
advantage over Sonnet, scoring a 70% accuracy for this standard, whereas Sonnet only achieves 34%.
Similarly, while Gemma 7B and phi-2 have comparable overall performance (.62 vs .63 accuracy
on MathCAMPS), some capabilities in each model seem nearly absent from the other. Gemma
7B is highly accurate when performing multi-digit multiplication — an ability stressed in standard
```
4.NBT.B.4, where Gemma 7B achieves 94% accuracy. In stark contrast, phi-2 only solves 22% of
```
those problems. On the other direction, phi-2 is one of the highest performing models on 4.NF.A.2
(“Compare two fractions with different numerators and different denominators”), with 90% accuracy.
In this same standard, Gemma 7B only scores 19%. Such stark differences are obscured when only
analyzing aggregate metrics, whereas MathCAMPS allows for a much more nuanced understanding
of mathematical reasoning capabilities.
**Overall ranking between models is largely a function of which skills we choose to evaluate.**
Overall accuracies in any dataset induce a single performance ranking of models. However, when
we look at individual CC standards in MathCAMPS, rankings are largely a function of which skills
we choose to evaluate. Comparing pairs of models across all standards, rarely we find cases where
one model Pareto-dominates another (i.e. is better on all standards): only 23.08% of all pairs of
models have a Pareto winner. Table 2 shows how the ranking of a model in individual skills can
often deviate strongly from its overall ranking. Here, the first ordinal in each cell shows the model’s
global ranking when comparing overall performance in MathCAMPS, whereas the second shows
the model’s ranking on that particular CC standard. We find many cases of large discrepancies. For
instance, on systems of equations, GPT-4o tends to excessively rely on decimal approximations when
operating with fractions, resulting in poor performance. Llemma 34B, which places 13th overall, is
the best performing model on a simple kindergarten-level word problems on adding to complete 10.
-----
Table 2: Largest model rank changes when focusing on one CC standard. Here, A B indicates that
the model ranks A[th] on MathCAMPS overall, but ranks B[th] when only evaluating on problems from
the indicated CC standard. Conversely, marks notable cases where a model’s performance on the
indicated CC standard is lower than its overall performance on MathCAMPS. We show selected rows
here, the complete table can be found in the Appendix.
**Model** **Top outlier skill** **Rank change**
GPT-4o 8.EE.C.8 - Solve two-variable systems (1[st] 22[th])
Claude-3 Opus 2.MD.B.5 - Add/sub within 100 (2[nd] 13[th])
Gemini-1.5 Pro K.OA.A.4 - Adding to equal 10 (3[rd] 19[th])
Gemini-1.5 Flash 4.OA.B.4 - Factor pairs within 100 (4[th] 20[th])
Claude-3 Haiku 3.OA.A.4 - Determine unknowns in mul/div probs (9[th] 1[st])
Llama 3 70B K.OA.A.4 - Adding to equal 10 (7[th] 17[th])
DeepSeek 67B K.NBT.A.1 - Decompose into 10s (10[th] 1[st])
Llemma 34B K.OA.A.4 - Adding to equal 10 (13[th] 1[st])
Mistral 7B 1.OA.A.1 - Add/sub within 20 (14[th] 21[th])
DeepSeek Coder 33B 6.EE.A.1 - Evaluate exponents (15[th] 3[rd])
Llemma 7B 6.EE.A.1 - Evaluate exponents (18[th] 5[th])
Gemma 2B 8.EE.C.8 - Solve two-variable systems (22[th] 11[th])
**Aggregate accuracies are strongly correlated between GSM8k and MathCAMPS** When considering overall performance, the trends in GSM8k hold on the novel problems from MathCAMPS,
which cover overlapping topics (Pearson correlation of 0.865, p < 10[−][5]; we show this correlation
in Figure 3). This correlation corroborates the progress that public benchmarks have witnessed,
suggesting that data contamination does not play a major role in explaining observed improvements in
recent LLMs. We note that prior work attempting to replicate the distribution of GSM8k, such as the
independent effort to collect GSM1k [26], has observed a smaller correlation, including substantial
drops in performance for some models. This is entirely compatible with our findings here, due to the
difficulty of exactly replicating the distribution over skills in any given human-created benchmark.
As the sharp differences in Table 2 indicate, an (unintended) shift in this distibution can drastically
— and unevenly — affect accuracy, even if no data contamination occurs. These shifts are easily
avoided in an automated pipeline as in MathCAMPS, allowing us to draw new problems from the
exact same distribution in the future.
**4.2** **Follow-up tasks**
We now evaluate the performance of language models when asked follow-up questions. Here, we
first give the initial problem, and in case the model answers correctly we ask either an incremental
follow-up, a counterfactual follow-up, or both (in separate contexts), depending on the standard
(some standards don’t have follow-ups, and for some problems we failed to find a cycle-consistent
follow-up within the max attempts). Here, we’re interested in analyzing the (lack of) robustness
that LMs might have when probed with extra questions — our follow-ups are generally answerable
using the same core mathematical knowledge involved in the initial problem but require longer range
attention and dialog understanding.
Table 3 shows overall accuracies when we only consider a model successful on a problem when
it also answers its follow-up questions correctly. We also show the major accuracy drops across
CC standards for each model (last two columns). We find many notable cases, in both stronger
and weaker models. GPT-4o, for instance, is 90% accurate in evaluating expressions of addition of
fractions with multi-digit numerators and denominators (5.NF.A.1 — notably, this requires putting
fractions in the same denominator). When asked to add another fraction to the result, or change one
of the original fractions to a new one and re-do the computation, its success rate when evaluated
at correctly answering both follow-ups drops to 61%, or a 29% decrease. Other models drop even
more dramatically. For instance, phi-2 solves 57% of the problems in 7.NS.A.2, which are about
multiplying two fractions (only requires two multi-digit multiplications — we do not require the result
to be in lowest terms). However, when asked to multiply the result by a further third fraction, phi-2
tends to not reuse its previous (correct) result, and instead proceeds by writing down the product of the
three numerators (and denominators), and attempt to directly evaluate this product. This strategy is
-----
Table 3: Model performance on our mathematical dialogue task, where the model must answer
follow-up questions besides the initial problem. The second column, Accuracy with follow-ups,
shows overall success rate across standards that contain follow-up questions, considering a model
successful only when it answers a problem and its follow-up questions correctly. The third and fourth
columns show the hardest standard for each model when it comes to follow-up questions, showing
a standard’s code and abbreviated description, the model’s accuracy ignoring follow-ups, and after
follow-ups.
**Model** **Acc. with follow-ups** **Largest accuracy drop w/ follow-ups**
GPT-4o 0.82 5.NF.A.1 - Add/sub fractions 0.90 0.61)
Claude-3 Opus 0.76 7.NS.A.1-fraction - Add/sub with fractions 0.57 0.25)
Gemini-1.5 Pro 0.77 5.NF.A.1 - Add/sub fractions 0.60 0.35)
Gemini-1.5 Flash 0.76 7.NS.A.1-fraction - Add/sub with fractions 0.78 0.38)
GPT-3.5 Turbo 0.71 7.NS.A.1-fraction - Add/sub with fractions 0.73 0.22)
Claude-3 Sonnet 0.72 5.NF.A.1 - Add/sub fractions 0.41 0.07)
Claude-3 Haiku 0.70 3.OA.A.3 - Mul/div within 100 1.00 0.73)
Llama 3 70B 0.69 4.NF.A.2 - Compare two fractions 0.99 0.66)
Mixtral 8x22B 0.69 7.NS.A.1-fraction - Add/sub with fractions 0.69 0.18)
DeepSeek 67B 0.68 6.NS.B.3 - Add/sub/mult/div decimals 0.59 0.37)
Llama 3 8B 0.58 4.NF.A.2 - Compare two fractions 0.90 0.52)
Mixtral 8x7B 0.58 5.NF.B.4 - Mult fractions 0.61 0.31)
Llemma 34B 0.55 5.NF.B.4 - Mult fractions 0.69 0.33)
Mistral 7B 0.48 7.NS.A.1-decimal - Add/sub with decimals 0.91 0.50)
DeepSeek Coder 33B 0.60 3.OA.A.3 - Mul/div within 100 0.95 0.81)
CodeLlama 34B 0.60 5.NF.B.4 - Mult fractions 0.52 0.39)
phi-2 0.39 7.NS.A.2 - Mult/div with fractions 0.57 0.08)
Llemma 7B 0.43 5.NF.B.4 - Mult fractions 0.61 0.22)
Gemma 7B 0.33 7.NS.A.1-decimal - Add/sub with decimals 0.91 0.32)
CodeLlama 13B 0.43 4.NBT.B.4 - Add/sub multi-digit nums 0.81 0.49)
CodeLlama 7B 0.49 2.NBT.B.7 - Add/sub within 100 0.80 0.67)
Gemma 2B 0.24 3.NBT.A.2 - Add/sub within 1000 0.93 0.26)
rarely successful, and it only achieves 8% accuracy when accounting for the follow-ups (an absolute
49% drop). Overall, we find many cases where models are not robust to simple follow-up questions.
We hypothesize that this setup of mathematical dialogue is much less frequent in pre-training data,
and that follow-up problems in MathCAMPS can be a rich source of further analyses for future work.
**4.3** **Learning dynamics**
Finally, we use Pythia [6] to showcase another analysis that MathCAMPS enables: understanding the
learning dynamics of mathematical skills during LM training. We evaluate checkpoints of Pythia 12B
on all standards, and track the performance change as the model was trained. Figure 2 shows Pythia’s
performance evolving during training on all 7 CC standards where the last checkpoint achieves
at least 30% accuracy. Early in training, after 28k steps, Pythia performs best in a Kindergarten
standard, K.OA.A.5 — “Fluently add and subtract within 5.”. At 57k steps, its performance is best
in both K.OA.A.5 (37% accuracy) and two first-grade standards, 1.OA.A.1 and 1.OA.A.2 — both
standards involve simple word problems with addition and subtraction within 20. Pythia starts to
become proficient at a sixth-grade standard around midway during training: 6.EE.A.1, which involves
evaluating simple expressions using whole-number exponents (e.g, computing squares and cubes).
These skills develop in tandem with its linguistic competence – at first, Pythia repeats questions
verbatim often, but at 57k steps it already often produces responses. Overall, the high-resolution of
MathCAMPS as a reasoning benchmark can support future work to deepen our understanding of how
language models acquire capabilities during training, and how specific factors (such as data, or scale)
contribute to their learning.
**5** **Conclusion**
We introduce MathCAMPS, a fine-grained synthetic benchmark of mathematical reasoning in LLMs.
MathCAMPS is directly grounded on the Common Core Standards, a widely used curriculum in
human education. By tying our problems to a human curriculum, we enable a much wider range
-----
Figure 2: Performance of Pythia 12B checkpoints on MathCAMPS standards as it evolves during
training. We show all 7 standards where the last checkpoint has at least 30% accuracy.
of analyses to understand mathematical reasoning capabilities and weaknesses of LLMs. We show
analyses of performance by grade level and identify particularly challenging skills for a range of
models, though we believe these are only a few examples of analyses that MathCAMPS permits.
We note that MathCAMPS might also find applications in educational tools for human students, due
to its correspondence to the Common Core. Future work in that direction will require psychometric
analyses, to ensure that problem difficulty (aside from the abilities involved) is grade appropriate.
While we currently cover 44 CC standards, our pipeline can be easily extended to cover additional
standards where problems have a computational nature, and where answers can be obtained using a
computer solver. These can include topics beyond high-school, including calculus and linear algebra.
This framework, however, is difficult to extend to more conceptual problems, including mathematical
proofs, or problems that require explanations, as opposed to a final computational answer. Judging
natural language reasoning reliably, in the absence of an exact answer to compare to, remains an open
problem — an important challenge to allow us to extend the scope of evaluation of mathematical
reasoning in LLMs.
**Acknowledgments**
This work was supported by a NSF Expeditions Grant, Award Number (FAIN) 1918771. GP was
also supported by the Stanford Interdisciplinary Graduate Fellowship (SIGF).
**References**
[1] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4
technical report. arXiv preprint arXiv:2303.08774, 2023.
[2] AI Anthropic. The claude 3 model family: Opus, sonnet, haiku. Claude-3 Model Card, 2024.
[3] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer,
Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language
model for mathematics. arXiv preprint arXiv:2310.10631, 2023.
[4] Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, and Ondˇrej Dušek. Leak, cheat,
repeat: Data contamination and evaluation malpractices in closed-source llms. arXiv preprint
_arXiv:2402.03927, 2024._
-----
[5] Xiao Bi, Deli Chen, Guanting Chen, Shanhuang Chen, Damai Dai, Chengqi Deng, Honghui
Ding, Kai Dong, Qiushi Du, Zhe Fu, et al. Deepseek llm: Scaling open-source language models
with longtermism. arXiv preprint arXiv:2401.02954, 2024.
[6] Stella Biderman, Hailey Schoelkopf, Quentin Gregory Anthony, Herbie Bradley, Kyle O’Brien,
Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit, USVSN Sai Prashanth, Edward
Raff, et al. Pythia: A suite for analyzing large language models across training and scaling. In
_International Conference on Machine Learning, pages 2397–2430. PMLR, 2023._
[7] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[8] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece
Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi,
Marco Tulio Ribeiro, and Yi Zhang. Sparks of artificial general intelligence: Early experiments
with gpt-4, 2023.
[9] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems, 2021.
[10] Kanishk Gandhi, Jan-Philipp Fränken, Tobias Gerstenberg, and Noah D. Goodman. Understanding social reasoning in language models with language models, 2023.
[11] Daya Guo, Qihao Zhu, Dejian Yang, Zhenda Xie, Kai Dong, Wentao Zhang, Guanting Chen,
Xiao Bi, Y Wu, YK Li, et al. Deepseek-coder: When the large language model meets
programming–the rise of code intelligence. arXiv preprint arXiv:2401.14196, 2024.
[12] Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, and Ece
Kamar. Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate
speech detection, 2022.
[13] Bernd Heine and Tania Kuteva. The genesis of grammar: A reconstruction, volume 9. Oxford
University Press, USA, 2007.
[14] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset,
2021.
[15] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh
Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile
Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
[16] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand,
et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
[17] Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat
Lee. Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463,
2023.
[18] Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher
Potts, Adina Williams, and Douwe Kiela. Dynaboard: An evaluation-as-a-service platform
for holistic next-generation benchmarking. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S.
Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems,
volume 34, pages 10351–10367. Curran Associates, Inc., 2021.
[19] Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondˇrej Certík, Sergey B. Kirpichev,[ˇ]
Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rathnayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta,
Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, Štˇepán
Rouˇcka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz. Sympy: symbolic computing in python. PeerJ Computer Science, 3:e103, January
2017.
-----
[20] Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. Beyond accuracy:
Behavioral testing of nlp models with checklist. arXiv preprint arXiv:2005.04118, 2020.
[21] Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,
Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models
for code. arXiv preprint arXiv:2308.12950, 2023.
[22] Rickard Stureborg, Dimitris Alikaniotis, and Yoshi Suhara. Large language models are inconsistent and biased evaluators. arXiv preprint arXiv:2405.01724, 2024.
[23] Gemini Team, Rohan Anil, Sebastian Borgeaud, Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai, Anja Hauth, et al. Gemini: a family of highly
capable multimodal models. arXiv preprint arXiv:2312.11805, 2023.
[24] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya
Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open
models based on gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
[25] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[26] Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao,
Pranav Raja, Dylan Slack, Qin Lyu, Sean Hendryx, Russell Kaplan, Michele Lunati, and
Summer Yue. A careful examination of large language model performance on grade school
arithmetic, 2024.
[27] Kaijie Zhu, Jiaao Chen, Jindong Wang, Neil Zhenqiang Gong, Diyi Yang, and Xing Xie. Dyval:
Dynamic evaluation of large language models for reasoning tasks, 2024.
-----
Table 4: CC Standards for Grade K
|Standard ID|Description|
|---|---|
|K.CC.C.7|Compare two numbers between 1 and 10 presented as written numerals.|
|K.OA.A.4|For any number from 1 to 9, find the number that makes 10 when added to the given number, e.g., by using objects or drawings, and record the answer with a drawing or equation.|
|K.OA.A.5|Fluently add and subtract within 5.|
|K.NBT.A.1|Compose and decompose numbers from 11 to 19 Into ten ones and some further ones, e.g., by using objects or drawings, and record each composition or decomposition by a drawing or equation (e.g., 18 = 10 + 8); understand that these numbers are composed of ten ones and one, two, three, four, fvie, six, seven, eight, or nine ones.|
Table 5: CC Standards for Grade 1
|Standard ID|Description|
|---|---|
|1.OA.A.1|Use addition and subtraction within 20 to solve word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using objects, draw- ings, and equations with a symbol for the unknown number to represent the problem.|
|1.OA.A.2|Solve word problems that call for addition of three whole numbers whose sum is less than or equal to 20, e.g., by using objects, drawings, and equations with a symbol for the unknown number to represent the problem.|
|1.OA.D.8|Determine the unknown whole number in an addition or subtraction equation relating three whole numbers.|
**A** **Common Core Standards in MathCAMPS**
[MathCAMPS is available on Github at https://github.com/gpoesia/mathcamps. All of the](https://github.com/gpoesia/mathcamps)
Common Core standards we implement are described in a configuration file, commoncore.yaml,
where standards are instantiated by composing high-level components from the Common Core
attribute grammar. Moreover, we provide our prompts used to generate the dataset and model
responses, as well as all problems and model responses for all LLMs we evaluated.
We list the Common Core standards we represent in MathCAMPS in Tables 4 through 12, segregated
by grade. Standards 3.MD.D.8, 4.MD.A.2, 7.NS.A.1, and 7.NS.A.3 are split up into sub-standards.
This was done for ease of implementation of the grammar.
**B** **Familiarity bias**
MathCAMPS was generated using GPT-4. GPT-4o, a model of the same family, was also the best
performer overall (Table 1). To test whether this might be due to a familiarity bias — problems being
in-distribution for GPT-4o, but out-of-distribution for other models —, we generated a 10%-scale
dataset using the exact same pipeline, but using Claude 3 Opus for both generating word problems
and testing cycle consistency. This dataset has the same distribution of standards as MathCAMPS.
We evaluated GPT-4o and Claude 3 Opus on this dataset — accuracies are reported in Table 13.
GPT-4o also performs better in this dataset, suggesting that its performance in MathCAMPS was not
due to a higher relative familiarity with the problems.
-----
Table 6: CC Standards for Grade 2
|Standard ID|Description|
|---|---|
|2.OA.A.1|Use addition and subtraction within 100 to solve one- and two-step word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with unknowns in all positions, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.|
|2.NBT.B.5|Fluently add and subtract within 100 using strategies based on place value, properties of operations, and/or the relationship between addition and subtraction.|
|2.NBT.B.6|Add up to four two-digit numbers using strategies based on place value and properties of operations.|
|2.NBT.B.7|Add and subtract within 1000, using concrete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method. Understand that in adding or subtracting three-digit numbers, one adds or subtracts hundreds and hundreds, tens and tens, ones and ones; and sometimes it is necessary to compose or decompose tens or hundreds.|
|2.MD.B.5|Use addition and subtraction within 100 to solve word problems involv- ing lengths that are given in the same units, e.g., by using drawings (such as drawings of rulers) and equations with a symbol for the unknown number to represent the problem.|
|2.MD.C.8|Solve word problems involving dollar bills, quarters, dimes, nickels, and pennies, using $ and ¢ symbols appropriately.|
Table 7: CC Standards for Grade 3
|Standard ID|Description|
|---|---|
|3.OA.A.3|Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.|
|3.OA.A.4|Determine the unknown whole number in a multiplication or division equation relating three whole numbers.|
|3.OA.C.7|Fluently multiply and divide within 100, using strategies such as the relationship between multiplication and division (e.g., knowing that 8 × 5 = 40, one knows 40 ÷ 5 = 8) or properties of operations. By the end of Grade 3, know from memory all products of two one-digit numbers.|
|3.OA.D.8|Solve two-step word problems using the four operations. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding.|
|3.MD.D.8- triangle|Solve real world and mathematical problems involving perimeters of polygons, including finding the perimeter given the side lengths, finding an unknown side length, and exhibiting rectangles with the same perime- ter and different areas or with the same area and different perimeters.|
|3.MD.D.8- quadrilateral|Solve real world and mathematical problems involving perimeters of polygons, including finding the perimeter given the side lengths, finding an unknown side length, and exhibiting rectangles with the same perime- ter and different areas or with the same area and different perimeters.|
|3.MD.D.8- polygon|Solve real world and mathematical problems involving perimeters of polygons, including finding the perimeter given the side lengths, finding an unknown side length, and exhibiting rectangles with the same perime- ter and different areas or with the same area and different perimeters.|
|3.NBT.A.2|Fluently add and subtract within 1000 using strategies and algorithms based on place value, properties of operations, and/or the relationship between addition and subtraction.|
-----
Table 8: CC Standards for Grade 4
|Standard ID|Description|
|---|---|
|4.OA.A.3|Solve multistep word problems posed with whole numbers and having whole-number answers using the four operations, including problems in which remainders must be Interpreted. Represent these problems using equations with a letter standing for the unknown quantity. Assess the reasonableness of answers using mental computation and estimation strategies including rounding.|
|4.OA.B.4|Find all factor pairs for a whole number in the range 1-100. Recognize that a whole number is a multiple of each of its factors. Determine whether a given whole number in the range 1-100 is a multiple of a given one-digit number. Determine whether a given whole number in the range 1-100 is prime or composite.|
|4.NBT.B.4|Fluently add and subtract multi-digit whole numbers using the standard algorithm.|
|4.NBT.B.5|Multiply a whole number of up to four digits by a one-digit whole number, and multiply two two-digit numbers, using strategies based on place value and the properties of operations. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.|
|4.NBT.B.6|Find whole-number quotients and remainders with up to four-digit divi- dends and one-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.|
|4.NF.A.2|Compare two fractions with different numerators and different denom- inators, e.g., by creating common denominators or numerators, or by comparing to a benchmark fraction such as 1/2. Recognize that com- parisons are valid only when the two fractions refer to the same whole. Record the results of comparisons with symbols >, =, or <, and justify the conclusions, e.g., by using a visual fraction model.|
|4.MD.A.2- decimal|Use the four operations to solve word problems involving distances, Intervals of time, liquid volumes, masses of objects, and money, includ- ing problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.|
|4.MD.A.2- fraction|Use the four operations to solve word problems involving distances, Intervals of time, liquid volumes, masses of objects, and money, includ- ing problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.|
|4.MD.A.3|Apply the area and perimeter formulas for rectangles in real world and mathematical problems.|
-----
Table 9: CC Standards for Grade 5
|Standard ID|Description|
|---|---|
|5.OA.A.1|Use parentheses, brackets, or braces in numerical expressions, and eval- uate expressions with these symbols.|
|5.NBT.B.5|Fluently multiply multi-digit whole numbers using the standard algo- rithm.|
|5.NBT.B.6|Find whole-number quotients of whole numbers with up to four-digit div- idends and two-digit divisors, using strategies based on place value, the properties of operations, and/or the relationship between multiplication and division. Illustrate and explain the calculation by using equations, rectangular arrays, and/or area models.|
|5.NBT.B.7|Add, subtract, multiply, and divide decimals to hundredths, using con- crete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.|
|5.NF.A.1|Add and subtract fractions with unlike denominators (including mixed numbers) by replacing given fractions with equivalent fractions in such a way as to produce an equivalent sum or difference of fractions with like denominators.|
|5.NF.A.2|Solve word problems involving addition and subtraction of fractions referring to the same whole, including cases of unlike denominators, e.g., by using visual fraction models or equations to represent the problem. Use benchmark fractions and number sense of fractions to estimate mentally and assess the reasonableness of answers.|
|5.NF.B.4|Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction.|
Table 10: CC Standards for Grade 6
|Standard ID|Description|
|---|---|
|6.NS.B.2|Fluently divide multi-digit numbers using the standard algorithm.|
|6.NS.B.3|Add, subtract, multiply, and divide decimals to hundredths, using con- crete models or drawings and strategies based on place value, properties of operations, and/or the relationship between addition and subtraction; relate the strategy to a written method and explain the reasoning used.|
|6.EE.A.1|Write and evaluate numerical expressions involving whole-number ex- ponents.|
|6.EE.B.7|Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers.|
Table 11: CC Standards for Grade 7
|Standard ID|Description|
|---|---|
|7.NS.A.1- fraction|Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram.|
|7.NS.A.1- decimal|Apply and extend previous understandings of addition and subtraction to add and subtract rational numbers; represent addition and subtraction on a horizontal or vertical number line diagram.|
|7.NS.A.2|Apply and extend previous understandings of multiplication and division and of fractions to multiply and divide rational numbers.|
|7.NS.A.3- fraction|Solve real-world and mathematical problems involving the four opera- tions with rational numbers.|
|7.NS.A.3- decimal|Solve real-world and mathematical problems involving the four opera- tions with rational numbers.|
-----
Table 12: CC Standards for Grade 8
|Standard ID|Description|
|---|---|
|8.EE.A.2|Use square root and cube root symbols to represent solutions to equations of the form x² = p and x³ = p, where p is a positive rational number. Evaluate square roots of small perfect squares and cube roots of small perfect cubes. Know that the square root of 2 is irrational.|
|8.EE.C.7|Solve linear equations in one variable.|
|8.EE.C.8|Analyze and solve pairs of simultaneous linear equations.|
|Model|GPT4-generated MathCAMPS accuracy Claude-generated MathCAMPS accuracy|
|---|---|
Table 13: Performance of GPT-4o and Claude 3 Opus on the dataset genreated using Claude
|GPT-4o Claude 3 Opus|0.910 0.954 0.887 0.909|
|---|---|
**C** **Data generation pipeline details**
**C.1** **Grammar**
We implemented a global attribute grammar in Python, where production rules are implemented as
recursive Python functions. Effectively, each CC standard has its own grammar, composed of pieces
from components from the global CC grammar, as well as possibly adding unique non-terminals.
Each CC standard contains the following parameters:
**Description: The description of the CC standard.**
**Short description: A shortened description of the CC standard.**
**Filters: A list of problem filters to ensure that all problems in this standard satisfy some requirement**
given in the Common Core description of the standard. The ProblemLength filter makes
sure that the problem is within the desired length. CheckIntermediateValues filters out
any problems with intermediate values greater or lesser than max_value or min_value,
respectively. The ChainsOfVariables filter eliminates any problems where variables are
assigned to equal exactly another variable, and nothing else. The ContainsTen filter checks
if the math word problem contains numbers adding up to 10, or contains a 10 in the problem
(for standards K.OA.A.4 and K.NBT.A.1, respectively).
**Transforms: List of problem transformations applied to all symbolic structures from this standard.**
The NoUselessVariables transform performs dead code elimination — it removes any
variables that do not contribute to the final answer by applying a simple graph reachability
algorithm on a dependency graph between statements, removing statements that the answer
does not depend on. The Simplify transform essentially inlines variables that are used only
once.
**Expressions: Lists non-terminals available to generate expressions in symbolic structures for this**
standard. For example, this can make specific binary operations (e.g. addition, division)
available on that particular standard.
**Min/max value: Specifies bounds on values for both the final answer and all intermediate values in**
the solution.
**Min/max number: Specifies bounds on numeric constants sampled in the symbolic structure.**
**Max depth: Sets a maximum depth for expressions in the symbolic structure.**
**Samples: We include 2+ hand-written, standard-relevant examples of a symbolic problem followed**
by a relevant natural language problem generation, which we use as few-shot prompts during
problem generation. We also use these prompts, but in reverse (natural language followed
by symbolic problem), when we prompt GPT-4 during cycle consistency.
-----
**C.2** **Answer Grading During Evaluation**
Given a solution in natural language, we first use a rule-based answer extractor to extract any model’s
numerical answer. In cases where a language model doesn’t answer in the required format, or
answers in an unexpected format, the answer is initially marked as incorrect. For all problems with
incorrect answers, we use Llama-3 70B to re-extract the final answer. We few-shot prompt it with
hand-generated examples of solutions and extracted final answers, and ask it to extract the final
answer from the new solution. If a problem that was previously incorrect is marked as correct (given
the newly extracted answer), we rerun the model on any followups the problem might have. Note
that this “regrading” step can only improve accuracy from the base result, since we only run it on
problems that failed under the rule-based evaluation. In practice, we found this process to have
negligible false-positive rate — only in a handful of cases across all models we observed either
answer extraction processes extracting the correct answer out of a wrong response (e.g., if the answer
to a problem is 2, and the model responds “On day 2, Sally bought 9 dolls”, the rule-based parser
extracts 2 as being the model’s answer, though the sentence implies its answer to be 9). On the other
hand, the LLaMA-3 70B extractor greatly reduces our false negative rate in a handful of models
(especially DeepSeek) which are more likely to respond in a format different from what our prompt
asks for.
**C.3** **Cost estimate**
All problems in MathCAMPS were generated using OpenAI gpt-4-0613, in May 2024. We estimate
an approximate cost of 330 USD to generate 9607 problems (including main problems and followups). This includes the cost to perform cycle consistency, and problems that are discarded by cycle
consistency. This gives an average cost of 0.034 USD (3.4 cents) per cycle-consistent problem or
follow-up question.
**D** **Correlation between MathCAMPS and GSM8k**
Figure 3 shows accuracies of several models on both GSM8k and MathCAMPS, along with the
line of best fit. There is a strong correlation between overall accuracy in both datasets (ρ = 0.91,
_p < 10[−][6]), though MathCAMPS allows for many fine-grained analysis besides overall performance._
**E** **Largest Model Rank Changes When Focusing on One CC Standard**
**(Complete Table)**
Table 14 shows the full table from which Table 2 was extracted.
**F** **Followup Analysis**
Table 15 lists model accuracies when only looking at the main problems (Main Acc.), their accuracies
when only looking at the incremental followups (IFUP Acc.), their accuracies when only looking
at the counterfactual followups (CFUP Acc.), and finally, the total number of followups seen by
each model. The total number of followups a model sees relies on whether or not they get the main
question for that followup correct. If a model does not correctly solve the main question, it is not
prompted with follow-ups. Note that each followup serves as a followup to the main question, as
opposed to a followup to each other.
-----
**Model** **Top outlier skill** **Rank change**
GPT-4o 8.EE.C.8 - Solve two-variable systems (1[st] 22[th])
Claude-3 Opus 2.MD.B.5 - Add/sub within 100 (2[nd] 13[th])
Gemini-1.5 Pro K.OA.A.4 - Adding to equal 10 (3[rd] 19[th])
Gemini-1.5 Flash 4.OA.B.4 - Factor pairs within 100 (4[th] 20[th])
GPT-3.5 Turbo 6.EE.A.1 - Evaluate exponents (5[th] 21[th])
Claude-3 Sonnet 2.NBT.B.5 - Add/sub within 100 (6[th] 12[th])
Claude-3 Haiku 3.OA.A.4 - Determine unknowns in mul/div probs (9[th] 1[st])
Llama 3 70B K.OA.A.4 - Adding to equal 10 (7[th] 17[th])
Mixtral 8x22B 8.EE.C.8 - Solve two-variable systems (8[th] 21[th])
DeepSeek 67B K.NBT.A.1 - Decompose into 10s (10[th] 1[st])
Llama 3 8B 4.NBT.B.4 - Add/sub multi-digit nums (11[th] 21[th])
Mixtral 8x7B 6.EE.A.1 - Evaluate exponents (12[th] 20[th])
Llemma 34B K.OA.A.4 - Adding to equal 10 (13[th] 1[st])
Mistral 7B 1.OA.A.1 - Add/sub within 20 (14[th] 21[th])
DeepSeek Coder 33B 6.EE.A.1 - Evaluate exponents (15[th] 3[rd])
CodeLlama 34B 5.NF.A.1 - Add/sub fractions (16[th] 22[th])
phi-2 K.OA.A.4 - Adding to equal 10 (17[th] 4[th])
Llemma 7B 6.EE.A.1 - Evaluate exponents (18[th] 5[th])
Gemma 7B K.OA.A.5 - Add/sub within 5 (19[th] 6[th])
CodeLlama 7B 8.EE.C.8 - Solve two-variable systems (21[th] 15[th])
Gemma 2B 8.EE.C.8 - Solve two-variable systems (22[th] 11[th])
Table 14: Largest changes in a model’s ranking when comparing its performance in a particular CC
standard, in contrast to only overall performance. This is a complete version of Table 2, which only
showed some models for brevity.
|Vendor Model|Main Acc. IFUP Acc. CFUP Acc. Total FUPs seen|
|---|---|
Table 15: Accuracy of each model on incremental follow-up questions (IFUP) as well as on counter
|Anthropic Claude-3 Opus Anthropic Claude-3 Sonnet Anthropic Claude-3 Haiku DeepSeek DeepSeek Coder 33B DeepSeek DeepSeek 67B EleutherAI LLemma 7B Google Gemini-1.5 Pro Google Gemini-1.5 Flash Google Gemma 2B Google Gemma 7B Meta Llama 3 8B Meta Llama 3 70B Meta CodeLlama 7B Meta CodeLlama 13B Meta CodeLlama 34B Microsoft phi-2 Mistral Mistral 7B Mistral Mixtral 8x7B Mistral Mixtral 8x22B OpenAI GPT-4o OpenAI GPT-3.5 Turbo|0.89 0.90 0.88 4142 0.86 0.86 0.87 3964 0.84 0.88 0.87 3819 0.65 0.79 0.85 1022 0.80 0.87 0.88 3286 0.62 0.68 0.80 2890 0.89 0.91 0.89 4140 0.87 0.89 0.87 4083 0.51 0.29 0.54 2044 0.62 0.55 0.60 2786 0.77 0.84 0.80 3476 0.85 0.87 0.84 3939 0.52 0.69 0.86 617 0.58 0.75 0.80 2451 0.64 0.82 0.88 844 0.63 0.48 0.78 2873 0.68 0.72 0.80 3090 0.76 0.80 0.82 3439 0.84 0.86 0.83 3948 0.92 0.92 0.90 4358 0.87 0.85 0.86 4063|
|---|---|
_factual follow-ups (CFUP). Note that these accuracies are not directly comparable, since models are_
only evaluated on follow-ups to problems that they respond correctly to; thus, each accuracy shown
here is over a different subset of follow-up problems in MathCAMPS.
-----
Figure 3: Relation between accuracy on GSM8k and on MathCAMPS.
-----
| [
"Gabriel, Poesia",
"Shubhra, Mishra",
"Belinda, Mo",
"Noah D., Goodman"
] | 2024-06-30T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2407.00900 | https://arxiv.org/abs/2407.00900 | https://www.semanticscholar.org/paper/4d77e472f2ac08b43309a726e7eb3f96f3e7e707 |
MathOdyssey: Benchmarking Mathematical Problem-Solving Skills in Large Language Models Using Odyssey Math Data | Large language models (LLMs) have significantly advanced natural language understanding and demonstrated strong problem-solving abilities. Despite these successes, most LLMs still struggle with solving mathematical problems due to the intricate reasoning required. This paper investigates the mathematical problem-solving capabilities of LLMs using the newly developed "MathOdyssey" dataset. The dataset includes diverse mathematical problems at high school and university levels, created by experts from notable institutions to rigorously test LLMs in advanced problem-solving scenarios and cover a wider range of subject areas. By providing the MathOdyssey dataset as a resource to the AI community, we aim to contribute to the understanding and improvement of AI capabilities in complex mathematical problem-solving. We conduct benchmarking on open-source models, such as Llama-3 and DBRX-Instruct, and closed-source models from the GPT series and Gemini models. Our results indicate that while LLMs perform well on routine and moderately difficult tasks, they face significant challenges with Olympiad-level problems and complex university-level questions. Our analysis shows a narrowing performance gap between open-source and closed-source models, yet substantial challenges remain, particularly with the most demanding problems. This study highlights the ongoing need for research to enhance the mathematical reasoning of LLMs. The dataset, results, and code are publicly available. | It is indicated that while LLMs perform well on routine and moderately difficult tasks, they face significant challenges with Olympiad-level problems and complex university-level questions, particularly with the most demanding problems. | ## MathOdyssey: Benchmarking Mathematical Problem-Solving Skills in Large Language Models Using Odyssey Math Data
**Meng Fang** [1] **Xiangpeng Wan** [2] **Fei Lu** [3] **Fei Xing** [4] **Kai Zou** [2][,][5]
1 2
University of Liverpool NetMind.AI
3 4
Johns Hopkins University Mathematica Policy Research
5
AGI Odyssey
```
[email protected]
```
**Abstract**
Large language models (LLMs) have significantly advanced natural language understanding and demonstrated strong problem-solving abilities. Despite these
successes, most LLMs still struggle with solving mathematical problems due to
the intricate reasoning required. This paper investigates the mathematical problemsolving capabilities of LLMs using the newly developed “MathOdyssey” dataset.
The dataset includes diverse mathematical problems at high school and university
levels, created by experts from notable institutions to rigorously test LLMs in
advanced problem-solving scenarios and cover a wider range of subject areas. By
providing the MathOdyssey dataset as a resource to the AI community, we aim
to contribute to the understanding and improvement of AI capabilities in complex mathematical problem-solving. We conduct benchmarking on open-source
models, such as Llama-3 and DBRX-Instruct, and closed-source models from the
GPT series and Gemini models. Our results indicate that while LLMs perform
well on routine and moderately difficult tasks, they face significant challenges
with Olympiad-level problems and complex university-level questions. Our analysis shows a narrowing performance gap between open-source and closed-source
models, yet substantial challenges remain, particularly with the most demanding
problems. This study highlights the ongoing need for research to enhance the
mathematical reasoning of LLMs. The dataset, results, and code are publicly
available.[1]
**1** **Introduction**
Large language models (LLMs) have demonstrated exceptional proficiency in mastering human
language and handling mathematical problems, including typical routine math problems [OpenAI,
2023, Touvron et al., 2023a, Reid et al., 2024]. In recent years, several benchmarks related to
mathematics have been proposed, such as the GSM8K dataset [Cobbe et al., 2021], the MATH dataset
[Hendrycks et al., 2021a] and so on. Recent LLMs and prompting approaches have addressed these
problems with notable success [OpenAI, 2023, Touvron et al., 2023a,b]. For instance, GPT-4, using
advanced prompting techniques [OpenAI, 2023], has achieved more than a 90% success rate on
GSM8K and 80% on MATH. These achievements indicate that LLMs possess remarkable capabilities
in mathematical reasoning.
The quest to improve LLMs’ mathematical problem-solving abilities is not just a demonstration of
technological advancement but a crucial step toward developing more general and capable artificial
[1https://mathodyssey.github.io/](https://mathodyssey.github.io/)
Preprint. Under review.
-----
**Olympiad-level**
**Problem: Let S = {1, 2, · · · 2024}, if the set of any n pairwise prime numbers in S has at**
least one prime number, the minimum value of n is .
**Answer: 16.**
**Reasoning: Taking the 15 numbers 1, 2[2], 3[2], ..., 43[2]. They violate the condition. Furthermore,**
since S does not contain any non-prime numbers with a minimum prime factor of at least
47 (because 47[2] _> 2024). Set 1 aside, there are only 14 types of non-prime numbers in S,_
classified by its minimum prime factor. Applying the Pigeonhole Principle, we conclude that
n = 16.
**High School**
**Problem: What are the solutions of the quadratic equation 15x[2]** = 2x + 8.
A)
_−_ 3[4] _[,][ −]_ [3]2
B)
_−_ 5[4] _[,][ 2]3_
C)
_−_ [3]2 _[,][ 4]5_
D)
_−_ [2]3 _[,][ 4]5_
**Answer: D**
**Reasoning: First move all terms to one side: 15x[2]** _−_ 2x − 8 = 0. Then factor into (5x −
4)(3x + 2) = 0. Setting 5x − 4 to zero results in a solution of x = [4]5 [and setting][ 3][x][ + 2][ to]
zero results in a solution of x = − 3[2] [.]
**University-level**
**Problem: Find the limit**
_f_ (2x[2] + x 3) _f_ (0)
lim _−_ _−_
_x→1_ _x −_ 1
given f _[′](1) = 2 and f_ _[′](0) = −1._
**Answer: −5.**
**Reasoning:xlim→1** _f_ (g(x))x−− Letf1 (g(1)) g. By the definition of the derivative and the chain rule and noting that(x) = 2x[2] + x − 3. Since g(1) = 0, the desired limit equals
_g[′](1) = 5, we have_
_f_ (g(x)) _f_ (g(1))
lim _−_ = f (g(1))g[′](1) = f (0)g[′](1) = ( 1)(5) = 5.
_x_ 1 _x_ 1 _[′]_ _[′]_ _−_ _−_
_→_ _−_
Table 1: MathOdyssey dataset examples. We demonstrate three distinct levels to challenge various
aspects of mathematical knowledge: Olympiad-level, High School, and University-level mathematics.
Each example consists of three parts: the problem, the answer, and the reasoning. Note that both
GPT-4 Turbo and Llama-3-70B are unable to solve the first Olympiad-level example. See Appendix
A for the LLMs’ solutions.
intelligence systems. On the one hand, this endeavor requires datasets that accurately measure and
challenge the AI’s mathematical reasoning beyond basic problems. Although their performance is
high on datasets like GSM8K [Cobbe et al., 2021], it remains uncertain how well they handle more
complex mathematical challenges, such as those found in university-level courses and competitive
high school mathematics. Performance may diminish significantly in these areas. This gap highlights
the ongoing need for enhanced mathematical reasoning capabilities in AI, a critical area for assessing
cognitive abilities akin to human intelligence. Moreover, a significant obstacle is that many existing
datasets might have been included in the training phases of these models, potentially skewing
performance metrics. Prominent examples include STEM-Q [Drori et al., 2023], GSM8K [Cobbe
et al., 2021], and the MATH dataset [Hendrycks et al., 2021a], which may no longer provide a true
test of an LLM’s mathematical capabilities. On the other hand, high-quality, expert-crafted original
problems are scarce. For instance, a study by OpenAI [Davis and Aaronson, 2023] included only 105
such problems in high school and university-level science and math.
To directly address these challenges, we introduce the “MathOdyssey” dataset, a rigorously curated
collection of 387 mathematical problems for evaluating the general mathematical capacities of LLMs.
See examples in Table 1. The MathOdyssey dataset is developed by the GAIC Math organization and
-----
features a spectrum of questions from Olympiad-level competitions, advanced high school curricula,
and university-level mathematics. Mathematics professionals, including high-school educators,
researchers, and university professors, crafted these problems under the invitation of the GAIC Math
organization. Their involvement ensures the dataset not only supports advanced AGI research but
also fosters necessary interdisciplinary collaboration.
Furthermore, we open-source the MathOdyssey dataset to facilitate its use in evaluating other LLMs.
The dataset has not been used for training by LLMs. We explore its utility in benchmarking the
advanced mathematical reasoning abilities of LLMs. By ensuring the originality and confidentiality
of the questions, we maintain the integrity and fairness of the assessments, providing a reliable tool
for advancing research into artificial general intelligence.
Our contributions are as follows:
- We introduce a new mathematical challenge that provides different levels of mathematical
problems and covers a wider range of subject areas.
- We open source the MathOdyssey benchmark dataset, a meticulously curated collection
of mathematical problems spanning various domains and levels, complete with natural
language solutions. This dataset is specifically designed to probe the reasoning abilities
of LLMs, offering a unique tool for assessing AI performance in complex mathematical
reasoning. Each question has an objective answer serving as ‘ground-truth’, allowing
for objective evaluation on the LLM outputs. In particular, the Open-Answer problems
emphasize the importance of detailed reasoning and solution.
- We conduct a comprehensive benchmark analysis using our dataset on both open-source and
closed-source LLMs. Our findings reveal that while closed-source models currently lead,
open-source models are rapidly catching up, highlighting the competitive landscape of LLM
capabilities in mathematical problem-solving.
**2** **Related Work**
**Large Language Models for Mathematics.** Applying large language models (LLMs) to mathematical problems has led to significant strides, though solving such problems remains challenging
due to the need for highly complex and symbolic multi-step reasoning capabilities. Both GPT-3.5
and GPT-4 [OpenAI, 2023] have shown promising reasoning abilities for complex mathematical
tasks, such as those in the MATH dataset [Hendrycks et al., 2021b]. However, the performance of
open-source models, like Llama-1 and Llama-2 [Touvron et al., 2023a,b], is still far from satisfactory
in this domain. To enhance the mathematical problem-solving abilities of LLMs, prompt-based
methods have also been developed [Wei et al., 2022, Wang et al., 2022, Zhou et al., 2022]. These
methods aim to improve reasoning and accuracy by guiding the models through structured prompts
that help in breaking down complex problems into manageable steps.
**Mathematical Evaluation for Large Language Models.** Evaluating the mathematical capacity
of large language models (LLMs) is crucial. Benchmarks such as GSM8K [Cobbe et al., 2021],
which targets middle-school level mathematics, and MATH [Hendrycks et al., 2021b], which focuses
on high-school math competitions, have been widely used. For university-level problems, datasets
like ProofNet [Azerbayev et al., 2023a] and OCWCourses [Lewkowycz et al., 2022] are prominent.
Additionally, MiniF2F [Zheng et al., 2022] and AlphaGeometry [Trinh et al., 2024] provide Olympiadlevel problems, while the SAT dataset [Azerbayev et al., 2023b] includes problems from the College
Board SAT examination. These datasets have limitations, particularly at the undergraduate level and
above, where they fall short in addressing graduate-level and competition-level difficulties [Frieder
et al., 2024]. To address this gap, we introduce the MathOdyssey dataset, a diverse collection of
mathematical problems designed to serve as a rigorous benchmark for assessing both open-source
and closed-source models. Table 2 highlights the properties of MathOdyssey compared to relevant
benchmarks, emphasizing the different levels and the diversity of subject areas and question types
in our benchmark. This dataset spans a spectrum of difficulty levels, from high school to advanced
university mathematics, highlighting the evolving capabilities and ongoing challenges in LLM
mathematical problem-solving.
-----
Table 2: Comparison of existing evaluation datasets for testing AI in mathematics. These datasets are
|Dataset|Year|Description|# of Test|
|---|---|---|---|
|GSM8k [Cobbe et al., 2021]|2021|8.5k middle-school level math word problems|1k|
|MATH [Hendrycks et al., 2021a]|2021|12.5k high-school math competitions|5k|
|OCWCourses [Lewkowycz et al., 2022]|2022|University-level, MIT’s OpenCourseWare|272|
|MiniF2F [Zheng et al., 2022]|2023|Olympiad-level|488|
|SAT [Azerbayev et al., 2023b]|2023|Figureless questions from SAT|32|
|ProofNet [Azerbayev et al., 2023a]|2023|University-level, proofs|371|
|AlphaGeometry [Trinh et al., 2024]|2024|Olympiad Geometry only|30|
|MathOdyssey (this work)|2024|High School, University-level, Olympiad-level|387|
**Dataset** **Year** **Description** **# of Test**
GSM8k [Cobbe et al., 2021] 2021 8.5k middle-school level math word problems 1k
MATH [Hendrycks et al., 2021a] 2021 12.5k high-school math competitions 5k
OCWCourses [Lewkowycz et al., 2022] 2022 University-level, MIT’s OpenCourseWare 272
MiniF2F [Zheng et al., 2022] 2023 Olympiad-level 488
SAT [Azerbayev et al., 2023b] 2023 Figureless questions from SAT 32
ProofNet [Azerbayev et al., 2023a] 2023 University-level, proofs 371
AlphaGeometry [Trinh et al., 2024] 2024 Olympiad Geometry only 30
MathOdyssey (this work) 2024 High School, University-level, Olympiad-level 387
limited, especially in the availability of high-quality, expert-crafted original problems with varying
difficulty levels.
**3** **MathOdyssey**
To evaluate the mathematical reasoning abilities of LLMs, we create the MathOdyssey dataset, a
rigorously curated collection designed by professionals from both universities and high schools. To
ensure comprehensive evaluation and promote transparency, we have made the entire MathOdyssey
dataset and benchmarking code publicly available. This allows other researchers to replicate our
study, compare methods, and explore new approaches using the dataset.
**3.1** **Data Collection**
**Design Principle.** The motivation behind the design of the MathOdyssey dataset is to establish a
new benchmark representing the pinnacle of human intellectual achievement, encouraging researchers
to push the boundaries of LLMs’ mathematical reasoning capabilities. To realize this vision, we
have curated challenges that epitomize comprehensive levels of math problems. Specifically, our
benchmark includes:
- Inclusion of diverse levels of math problems: Ensuring a comprehensive understanding and
catering to various proficiency levels promotes a well-rounded mastery of mathematical
concepts and problem-solving skills. This dataset offers a range of problems, starting
from basic concepts and gradually increasing in difficulty to cover advanced topics. This
allows for a thorough evaluation of AI capabilities across various levels of high school and
university mathematics.
- Inclusion of different subject area problems: Enhancing LLMs’ mathematical proficiency
by exposing them to a wide range of concepts and techniques, from foundational arithmetic
to advanced topics such as algebra, number theory, geometry, combinatorics, and calculus.
These diverse subject areas help identify LLMs’ strengths and areas for improvement,
encouraging the development of critical mathematical reasoning, problem-solving skills,
and a deeper appreciation for the interconnected nature of mathematics. By integrating
various mathematical disciplines, researchers can create a more engaging and comprehensive
learning environment that prepares LLMs for complex real-world challenges in mathematics.
- Provision of objective answers and detailed solutions: The objective answers serve as
‘ground-truth’, allowing for objective evaluation of the LLM outputs. In particular, the
Open-Answer problems emphasize the importance of detailed reasoning and solution. Given
the varying difficulty and subject areas of these problems, which may exceed comprehension
without a specialized background in mathematics, each problem is accompanied by expertly
crafted solutions detailing the reasoning steps involved. These solutions are useful for
evaluation and can enhance the assessment of LLMs’ reasoning processes.
**Human professionals.** The dataset was created by human professionals to ensure high quality.
Experts developed a wide range of mathematical problems for the MathOdyssey dataset, featuring
a spectrum of questions from Olympiad-level competitions, advanced high school curricula, and
university-level mathematics. Mathematics professionals, including high-school educators, university
professors, and researchers, crafted these problems. Their involvement ensures the dataset not only
supports advanced AGI research but also fosters necessary interdisciplinary collaboration.
-----
Statistics – University-level – 4.39% (17)
Probability – University-level – 5.43% (21)
Algebra – Olympiad-level – 21.19% (82)
Differential Equations – University-level – 3.62% (14)
Calculus and Analysis – University-level
– 6.20% (24)
Number Theory – Olympiad-level – 1.03% (4)
Linear Algebra and Abstract Algebra
Geometry – Olympiad-level – 6.46% (25) – University-level – 6.46% (25)
Combinatorics – Olympiad-level – 9.56% (37) Pre-Calculus – High School – 14.21% (55)
Geometry – High School – 3.62% (14)
Algebra – High School – 17.83% (69)
Figure 1: Mathematical problems across educational levels. We curate and categorize problems by
difficulty and subject area.
A typical problem in the MathOdyssey dataset comprises three components: the problem, the answer,
and the reasoning, as detailed in Table 1. The problems are original and not sourced from previous
datasets or textbooks. Each problem is accompanied by an answer and a detailed solution that explains
the reasoning process used to derive the answer. After creation, the problems undergo independent
review by a separate team of researchers with expertise in mathematics. This team assesses the
problems and their solutions, eliminating any ambiguous or redundant responses to enhance the set’s
validity and reliability. This rigorous process guarantees the quality and dependability of the final
problem set.
**3.2** **Dataset Analysis**
To understand the properties of the MathOdyssey dataset, we analyze the questions and answers.
Specifically, we explore (i) the difficulty of questions based on the type of reasoning required to
answer them, (ii) the subject areas of the problems, and (iii) the diversity of answer types.
**Difficulty of questions.** In the MathOdyssey dataset, each category is designed to evaluate different
facets of mathematical reasoning and problem-solving capabilities, ranging from fundamental high
school concepts to complex university-level theories, as summarized in Figure 1. This diverse dataset
is structured into three distinct levels to challenge various aspects of mathematical knowledge:
- Olympiad-level: It tests advanced problem-solving skills with questions in Algebra, Number
Theory, Geometry, and Combinatorics.
- High School: Broadening the scope, this category includes problems in Algebra, Geometry,
and Pre-Calculus, covering a comprehensive range of high school math concepts.
- University-level: Catering to higher education, this segment offers challenges in Linear and
Abstract Algebra, Calculus and Analysis, Differential Equations, Probability, and Statistics,
suitable for university students.
The MathOdyssey dataset categorizes mathematical problems across different educational levels,
helping to understand the distribution and scope of problems included in the dataset. For Olympiadlevel Competition, the categories and their respective percentages are Algebra (21.19%), Number
Theory (1.03%), Geometry (6.46%), and Combinatorics (9.56%), totaling 38.24%. For High School
Mathematics, the categories are Algebra (17.83%), Geometry (3.62%), and Pre-Calculus (14.21%),
totaling 35.66%. For University-level, the categories are Linear and Abstract Algebra (6.46%),
Calculus and Analysis (6.20%), Differential Equations (3.62%), Probability (5.43%), and Statistics
(4.39%), totaling 26.10%. Three subject areas, Differential Equations, Probability, and Statistics,
only appear at the University level.
**Subject areas of the problems.** The problems encompass a wide range of topics, including Algebra,
Number Theory, Geometry, Combinatorics, Pre-Calculus, Linear and Abstract Algebra, Calculus and
-----
**Open-Answer: Let S = {1, 2, · · · 2024}, if the set of any n**
pairwise prime numbers in S has at least one prime number,
True-False (16) the minimum value of n is ____________.
Open-Answer
(244)
**Multi-Choice: Find the solution of 4(3y − 5) = 2(7y + 3)**
A) − 13
B) − 4
C) 11/2
D) 13
Multiple-Choice
(127) **True-False: A sample of 30 observations yields a sample**
mean of 50. Assume the population standard deviation is
known to be 10. When testing the hypothesis that the
population mean is 45 at the 5% significance level, should we
accept the hypothesis?
Answer Types Examples
Figure 2: There are three answer-types: True-False questions, Multiple-Choice questions and OpenAnswer questions.
Analysis, Differential Equations, Probability, and Statistics, as shown in Figure 1. The MathOdyssey
dataset encompasses a wide range of subject areas, providing a comprehensive testing ground for
the mathematical reasoning and problem-solving capabilities of large language models (LLMs).
Algebra problems constitute 21.19% from Olympiad-level Competition and 17.83% from High
School Mathematics, making them the most represented areas in the dataset. In contrast, Number
Theory problems, with only 1.03% from Olympiad-level Competition, have the lowest representation.
Pre-Calculus problems, accounting for 14.21% of High School Mathematics, play a significant role
in preparing students for more advanced calculus topics. Other subject areas, including Calculus
and Analysis, Linear and Abstract Algebra, Differential Equations, Probability, and Statistics, each
contribute around 4% to 8% to the dataset. See Appendix B for examples that help better understand
the reasoning required to answer the questions.
**Diversity of answer types.** The MathOdyssey dataset includes a variety of answer types, providing
a comprehensive assessment of the mathematical reasoning and problem-solving capabilities of large
language models (LLMs). The distribution of answer types is shown in Figure 2, and it is categorized
into three main types: True-False questions, Multiple-Choice questions, and Open-Answer questions.
The distribution of answer types in the MathOdyssey dataset is designed to provide a well-rounded
evaluation of LLMs’ mathematical capabilities. With 63.0% of the questions being open-answer, the
dataset emphasizes the importance of detailed reasoning and solution generation. Multiple-choice
questions, making up 32.8%, help assess the models’ ability to choose correct answers from given
options, while true-false questions, at 4.1%, provide a quick check of fundamental understanding.
This diverse mix of answer types ensures that LLMs are tested on various aspects of mathematical
problem-solving, from basic validation to complex reasoning and solution generation, requiring an
understanding of the concepts.
**4** **Experiments**
Our goal is to provide a comprehensive standardized dataset to evaluate LLMs on mathematical
reasoning. By comparing different models, our benchmarks highlight their strengths and weaknesses.
**4.1** **Models**
We evaluate both open-source and closed-source LLMs. The models tested include GPT-4 Turbo,
GPT-4 [OpenAI, 2023], GPT-3.5 Turbo, Gemini models [Reid et al., 2024], Claude 3 [Anthropic,
2024], Llama-3-70B, and DBRX-Instruct [Mosaic, 2024]. All models are tested using chain-ofthought reasoning [Wei et al., 2022]. See Appendix C for details of the baselines and prompts.
-----
**4.2** **Model Evaluation**
A key advantage of the MathOdyssey data is that every question has an objective answer, so that it is
straightforward to check the correctness by code. Such objective answers avoid subjective judgments
from humans, making the evaluation consistent and reliable.
We use GPT-4 to assist in evaluating model accuracy, particularly for open-answer questions. The
metric measures the similarity between the predicted and ground truth answers. In the MathOdyssey
dataset, various types of questions and answers are included. We employ a prompt-based method to
provide scores for evaluation, considering the following criteria:
- Mathematical Equivalence: Verify answers based on mathematical equivalence using advanced tools like symbolic computation software to confirm the equivalence of different
algebraic or symbolic expressions.
- Scoring: Assign a score of ‘1’ for answers that match or are equivalent to the provided
solution (exact value, choice label, or correctly rounded numerical approximation). Assign
a score of ‘0’ for incorrect answers without providing explanatory feedback.
- Handling Multiple Choices: Consider the answer correct if the student correctly identifies
the choice that matches the solution. Also, treat the corresponding choice as correct if the
student provides the exact value that aligns with the problem’s context.
- Numerical Equivalence: Accept numerical answers that are correct to at least two decimal
places or more, depending on the required precision.
- Symbolic and Algebraic Identities: Recognize and accept equivalent algebraic forms as
correct, such as standard mathematical identities.
- Trigonometric and Logarithmic Forms: Accept equivalent trigonometric and logarithmic
expressions, acknowledging transformations that change the form but not the value.
- Comprehensive Evaluation: Encourage the use of computational tools for checking equivalence in cases where expressions are too complex for straightforward visual inspection.
See Appendix D for the requirements and prompts used in the evaluation method.
**4.3** **Results and Analysis**
We first report the performance on our mathematical benchmarks, as shown in Table 3. Our observations indicate that the benchmark is challenging for these models, with overall performance
below 60%.[2] The Gemini Math-Specialized 1.5 Pro exhibits the highest overall performance at
55.8%, suggesting that specialized training significantly enhances capabilities. GPT-4 Turbo achieves
47.03%, followed by Gemini 1.5 Pro at 45.0%, and Claude 3 Opus at 40.6%, all showing competitive
performance. For closed-source models (specifically the GPT series) and state-of-the-art open-source
models such as Llama-3-70B and DBRX-Instruct, the results show that the selected open-source
models not only surpass the performance of GPT-3.5 but are also approaching the capabilities of
earlier versions of GPT-4.
When comparing different levels of mathematical problems for GPT models, we observe that High
School mathematics is the easiest category for all models, with GPT-4 models scoring above 70%.
Olympiad-level problems are the most difficult, with all models scoring below 11%. Similar trends
are seen for Llama-3-70B and DBRX-Instruct, with their performance in the Olympiad-level category
being even lower, at less than 10%.
Furthermore, closed-source models, particularly the GPT-4 Turbo, exhibit stronger performance in
high school and university-level math, highlighting ongoing advancements in their development. This
data underscores the rapid progression of closed-source models in handling increasingly difficult
mathematical questions over time. The performance gap between the best closed-source model,
GPT-4 Turbo, and the open-source Llama-3 for difficult mathematical problems is notably narrow. For
instance, GPT-4 Turbo achieves an overall accuracy of 10.14% in the Olympiad-level mathematics,
while Llama-3 achieves 9.46%. This demonstrates that both models, despite notable progress, still
2Advanced prompting methods using GPT-4 models in the contest have achieved performance improvements
between 60% and 70%.
-----
**Model** **Olympiad-level** **High School** **University-Level** **Overall**
GPT-4 Turbo 10.14% 84.78% 49.50% 47.03%
GPT-4 5.41% 74.64% 32.67% 37.21%
GPT-3.5 Turbo 2.03% 41.30% 15.84% 19.64%
Gemini
-1.5 Pro - - - 45.0 %
-Math-Specialized 1.5 Pro - - - 55.8 %
Claude 3 Opus - - - 40.6 %
Llama-3-70B 9.46% 52.17% 21.78% 27.91%
DBRX-Instruct 8.11% 42.75% 20.79% 23.77%
Table 3: Results for different LLMs. We use chain-of-thought reasoning for solving problems. The
performance of Gemini 1.5 Pro and Claude 3 Opus are quoted from the Gemini 1.5 report [Reid et al.,
2024]. Both GPT-4-Turbo and Gemini 1.5 Pro outperform the other models. For GPT-4-Turbo, we
use results based on gpt-4-turbo-2024-04-09. For GPT-4, we use results based on gpt-4-0613. For
GPT-3.5 Turbo, we use results based on gpt-3.5-turbo-0125.
**Category** **GPT-4 Turbo GPT-3.5 Turbo Llama-3-70B DBRX-Instruct**
**Olympiad-level:**
Algebra 8.54% 2.44% 8.54% 4.88%
Number Theory 0.00% 0.00% 25.00% 0.00%
Geometry 16.00% 4.00% 8.00% 20.00%
Combinatorics 10.81% 0.00% 10.81% 8.11%
**High School Mathematics:**
Algebra 88.41% 39.13% 44.93% 39.13%
Geometry 92.86% 71.43% 71.43% 57.14%
Pre-Calculus 78.19% 36.36% 56.36% 43.64%
**University-level:**
Differential Equations 71.43% 28.57% 50.00% 28.57%
Linear & Abstract Algebra 44.00% 16.00% 24.00% 28.00%
Calculus & Analysis 62.50% 16.67% 20.83% 12.50%
Probability 14.29% 9.52% 0.00% 4.76%
Statistics 64.71% 11.76% 23.53% 35.29%
Table 4: Results for different LLMs across various subject areas. Note that the results are used for
evaluating the LLMs by direct comparison and may be improved with different prompting methods.
face significant challenges in solving these complex problems. However, for other difficulty levels,
the gap becomes larger. For example, GPT-4 Turbo achieves 84.78% in high school mathematics,
while Llama-3-70B scores only 52.17%, a difference of more than 30%.
Table 4 presents the results for different LLMs across various subject areas. The results show that
GPT-4 Turbo consistently outperforms others across most categories, particularly in High School
Mathematics and University-Level subjects. It shows a notable lead in Algebra, Geometry, and PreCalculus at the high school level, and Differential Equations, Linear & Abstract Algebra, Calculus &
Analysis, and Statistics at the university level. GPT-3.5 Turbo shows consistent but lower performance
compared to GPT-4 Turbo. Llama-3-70B performs well in certain areas, particularly in Olympiadlevel problems. It has the highest score in Number Theory among all models. However, it struggles
significantly in Series and Probability. DBRX-Instruct shows strength in Olympiad-level Geometry
but generally lags behind GPT-4 Turbo and Llama-3-70B in other categories.
**5** **Conclusion**
We introduce MathOdyssey, a dataset for assessing LLMs’ mathematical problem-solving skills. Our
dataset, evaluation methods, and code are openly available. We have shown that while LLMs, both
open-source like Llama-3 and DBRX-Instruct, and closed-source such as the GPT series, demonstrate
proficiency in routine and moderately difficult mathematics, they struggle significantly with complex
-----
Olympiad-level problems. Additionally, we have revealed promising developments; open-source
models are beginning to approach the performance levels of earlier GPT-3.5 iterations. Despite this
progress, performance on the most challenging questions remains low, highlighting a clear gap that
future advancements need to address.
Ultimately, our research underscores the ongoing journey towards achieving human-like mathematical
reasoning in AI, with the MathOdyssey dataset serving as a benchmark for catalysing future developments. We are optimistic that continued research will progressively bridge the existing capability
gap. In the future, expanding the MathOdyssey dataset to include a wider range of problem types and
enhancing metrics to better capture deep mathematical reasoning can yield further insights into LLM
capabilities.
**Limitation. While the MathOdyssey dataset includes a variety of problems across different levels**
of mathematics, the questions may not cover all types of mathematical reasoning or problemsolving approaches. This limitation could affect how well the dataset generalizes to other forms of
mathematical challenges not represented in your collection.
**Future. To address generalizability limitations, future work involves expanding the dataset to**
include a wider range of mathematical topics and problem types, including those that require visual
representations, proofs, or interactive problem-solving.
**Acknowledgements**
We would like to extend our sincere gratitude to AGI Odyssey, the NGO responsible for organizing the
Global Artificial Intelligence Championships (GAIC) Math 2024. Their dedication and commitment
to promoting artificial intelligence education and innovation have been invaluable to the success of
this project. Additionally, we appreciate their contribution of resources and support, which have
played a significant role in making this initiative possible.
**References**
OpenAI. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée
Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand
Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language
models. arXiv preprint arXiv:2302.13971, 2023a.
Machel Reid, Nikolay Savinov, Denis Teplyashin, Dmitry Lepikhin, Timothy Lillicrap, Jean-baptiste
Alayrac, Radu Soricut, Angeliki Lazaridou, Orhan Firat, Julian Schrittwieser, et al. Gemini
1.5: Unlocking multimodal understanding across millions of tokens of context. arXiv preprint
_arXiv:2403.05530, 2024._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168,
2021.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021a.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu,
Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn,
Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel
Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee,
Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra,
Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi,
Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh
-----
Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen
Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic,
Sergey Edunov, and Thomas Scialom. Llama 2: Open foundation and fine-tuned chat models.
_arXiv preprint arXiv:2307.09288, 2023b._
Iddo Drori, Sarah Zhang, Zad Chin, Reece Shuttleworth, Albert Lu, Linda Chen, Bereket Birbo,
Michele He, Pedro Lantigua, Sunny Tran, et al. A dataset for learning university stem courses
at scale and generating questions at a human level. In Proceedings of the AAAI Conference on
_Artificial Intelligence, volume 37, pages 15921–15929, 2023._
Ernest Davis and Scott Aaronson. Testing gpt-4 with wolfram alpha and code interpreter plug-ins on
math and science problems. arXiv preprint arXiv:2308.05713, 2023.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. NeurIPS,
2021b.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_neural information processing systems, 35:24824–24837, 2022._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models.
_arXiv preprint arXiv:2203.11171, 2022._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex reasoning
in large language models. arXiv preprint arXiv:2205.10625, 2022.
Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W. Ayers, Dragomir R. Radev,
and Jeremy Avigad. Proofnet: Autoformalizing and formally proving undergraduate-level mathematics. ArXiv, abs/2302.12433, 2023a.
Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai
Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems
with language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,
editors, Advances in Neural Information Processing Systems, 2022.
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. MiniF2F: A cross-system benchmark for
formal Olympiad-level mathematics, 2022.
Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He, and Thang Luong. Solving olympiad geometry
without human demonstrations. Nature, 625(7995):476–482, 2024. ISSN 0028-0836, 1476-4687.
doi: 10.1038/s41586-023-06747-5.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q.
Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An Open Language Model For
Mathematics, 2023b.
Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz,
Philipp Petersen, and Julius Berner. Mathematical capabilities of chatgpt. Advances in Neural
_Information Processing Systems, 36, 2024._
Anthropic. The claude 3 model family: Opus, sonnet, haiku. 2024.
[Mosaic. Introducing dbrx: A new state-of-the-art open llm, March 2024. URL https://www.](https://www.databricks.com/blog/introducing-dbrx-new-state-art-open-llm)
```
databricks.com/blog/introducing-dbrx-new-state-art-open-llm.
```
-----
**A** **MathOdyssey dataset examples and solutions**
**Olympiad-level Problem. Let S = {1, 2, · · · 2024}, if the set of any n pairwise prime numbers in S**
has at least one prime number, the minimum value of n is .
**Answer: 16**
**Reasoning: Taking the 15 numbers 1, 2[2], 3[2], ..., 43[2]** violates the condition. Furthermore, since S
does not contain any non-prime numbers with a minimum prime factor of at least 47, there are only
14 types of non-prime numbers in S, excluding 1. Applying the Pigeonhole Principle, we conclude
that n = 16.
**GPT-4 Turbo’s Solution: Answer: 307**
Reasoning: The minimum number of elements required such that any subset of these elements from
the set S = {1, 2, ..., 2024} containing pairwise prime numbers includes at least one prime number is
307. This is based on the count of prime numbers up to 2024 (approximately 306) and the application
of the Pigeonhole Principle, which dictates that selecting more numbers than there are primes forces
the inclusion of a prime number or a violation of the pairwise prime condition.
**High School Problem. What are the solutions of the quadratic equation 15x[2]** = 2x + 8?
A) B) C) D)
_{−_ 3[4] _[,][ −]_ [3]2 _[}]_ _{−_ [4]5 _[,][ 2]3_ _[}]_ _{−_ 2[3] _[,][ 4]5_ _[}]_ _{−_ [2]3 _[,][ 4]5_ _[}]_
**Answer: D**
**Reasoning: First move all terms to one side: 15x[2]−2x−8 = 0. Then factor into (5x−4)(3x+2) = 0.**
Setting 5x − 4 to zero results in a solution of x = [4]5 [and setting][ 3][x][ + 2][ to zero results in a solution of]
_x = −_ 3[2] [.]
**GPT-4 Turbo’s Solution: Answer: D**
Reasoning: The quadratic equation was first rearranged to standard form, coefficients were identified,
and the quadratic formula was applied to find the solutions. The solutions were verified by substituting
back into the original equation.
**University-level Problem. Find the limit**
_f_ (2x[2] + x 3) _f_ (0)
lim _−_ _−_
_x→1_ _x −_ 1
given f _[′](1) = 2 and f_ _[′](0) = −1._
**Answer:** 5
_−_ _f_ (g(x)) _f_ (g(1))
**Reasoning: Let g(x) = 2x[2]** + x − 3. Since g(1) = 0, the desired limit equals limx→1 _x−−1_ .
By the definition of the derivative and the chain rule and noting that g[′](1) = 5, we have
_f_ (g(x)) _f_ (g(1))
lim _−_ = f (g(1))g[′](1) = f (0)g[′](1) = ( 1)(5) = 5.
_x_ 1 _x_ 1 _[′]_ _[′]_ _−_ _−_
_→_ _−_
**GPT-4 Turbo’s Solution: Answer: −5**
Reasoning: The limit was calculated by interpreting it as the derivative of a composed function,
applying the chain rule, and substituting the given derivative values.
**B** **MathOdyssey different subject areas**
Table 5 presents MathOdyssey examples spanning various subject areas. These encompass Algebra,
Number Theory, Geometry, Combinatorics, Pre-Calculus, Linear and Abstract Algebra, Calculus and
Analysis, Differential Equations, as well as Probability and Statistics.
**C** **Baselines and prompts**
Figure 3 depicts the prompt utilized for guiding Language Models (LLMs) in solving mathematical
problems within our experimental framework. This prompt distinctly outlines the system’s role as a
math professor, delineating task specifications and the anticipated output format for tackling intricate
mathematical challenges.
-----
Table 5: Examples of different subject areas.
|Subject Area|Example|
|---|---|
|Algebra|Let S = 1, 2, 2024, if the set of any n pairwise prime { · · · } numbers in S has at least one prime number, the minimum value of n is .|
|Number Theory|A natural number whose last four digits are 2022 and is divisible by 2003 has a minimum value of .|
|Geometry|In a cube ABCD −A 1B 1C 1D 1, AA = 1, E, F are the mid- 1 points of edges CC, DD, then the area of the cross-section 1 1 obtained by the plane AEF intersecting the circumscribed sphere of the cube is .|
|Combinatorics|If three points are randomly chosen from the vertices of a regular 17-sided polygon, what is the probability that the chosen points form an acute-angled triangle?|
|Pre-Calculus|In △ABC, AB = 10 cm, ∠B = 90◦, and ∠C = 60◦. Determine the length of BC. √ √ 10 3 A) 10 cm B) 10 3 cm C) cm D) 20 cm 3|
|Linear and Abstract Algebra|Find the solution [x, x, x ] to the following equations 1 2 3 ( x + 3x + 3x = 16, 1 2 3 3x + x + 3x = 14, 1 2 3 3x + 3x + x = 12. 1 2 3|
|Calculus and Analysis|Evaluate the following limit: p p 3 lim n2 + 2n 1 n2 + . − − n→∞|
|Differential Equations|Co √nsider the differential equation dd xy = xy. Find the value of y( 2) given that y(0) = 2.|
|Probability|Suppose that A, B, and C are mutually independent events and that P(A) = 0.2, P(B) = 0.5, and P(C) = 0.8. Find the probability that exactly two of the three events occur.|
|Statistics|Given the data set 3, 7, 7, 2, 5, calculate the sample mean µ and { } the sample standard deviation σ. Present the answer as [µ, σ].|
**D** **Evaluation**
Figure 4 depicts the prompt employed during the evaluation of large language models in our experiments. This prompt defines the system’s role as a math teacher, providing both assessment criteria and
the expected output format for grading mathematical problems. We have also made our evaluation
code accessible to the public.
-----
You are now assuming the role of a math professor. Your task is to assist the user by solving
complex mathematical problems in a detailed and step-by-step manner.
## Task Requirements:
1. **Detailed Problem Analysis**: Start by analyzing the given problem. Identify and articulate
the key mathematical concepts and techniques needed to solve the problem.
2. **Step-by-Step Solution**: Decompose the problem into manageable steps. Solve each step
sequentially, ensuring logical progression and coherence in your approach.
3. **Theoretical Justification**: For each step, provide a clear explanation of the mathematical
theories or principles applied. Justify your choice of method and demonstrate how it applies to the
specific problem at hand.
4. **Calculation Verification**: After solving each step, verify your calculations. Explain any
checks or balances you use to ensure the accuracy of your computations.
5. **Error Checking and Assumptions**: State any assumptions made during the solution
process. Discuss potential errors or alternative methods that could impact the solution.
6. **Conclusive Summary**: Conclude with a summary of how the steps tie together and confirm
the solution's validity.
## Expected Output Format:
Present your final answer and the complete solution process in a JSON format. This should
include:
- A `float` value or a mathematical algebraic expression for the answer.
- Detailed reasoning for each step of the solution.
Your output should be formatted as a JSON object enclosed in Markdown code blocks tagged
with 'json'. For example:
```json
{{
"answer": "<answer>",
"reasoning": "<detailed solution process>"
}}
```
Ensure that all task requirements are meticulously followed in your response.
Figure 3: Mathematical problem-solving prompts employed by LLMs.
-----
Assume the role of a math teacher tasked with evaluating student responses against the provided
solutions, which may include exact values, multiple-choice answers, or numerical approximations.
The question is provided as: {question}, the correct answer is provided as: {true}.
## Evaluation Criteria:
1. **Mathematical Equivalence**: Evaluate answers based on deep mathematical equivalence,
not just numerical accuracy. Use advanced tools or techniques to verify if different algebraic or
symbolic expressions are equivalent. Tools like symbolic computation software (e.g., Wolfram
Alpha, SymPy) should be used to confirm equivalences such as \\( \\frac{{\\sqrt{{6}}\\sqrt{{2}}}}{{2}} \\) being equivalent to \\( \\sqrt{{2 - \\sqrt{{3}}}} \\).
2. **Scoring**: Assign a score of '1' for any answer that matches or is equivalent to the provided
solution, whether it is an exact value, a choice label (e.g., A, B, C), or a correctly rounded
numerical approximation. Assign a score of '0' for incorrect answers. Do not provide any
explanatory feedback in your evaluation.
3. **Handling Multiple Choices**: If the solution provided is a choice (e.g., A, B, C, D, E, F) and
the student identifies this choice correctly, treat it as correct. If the solution is an exact value and
the student provides the corresponding choice that reflects this value correctly according to the
problem's context, also treat it as correct.
4. **Numerical Equivalence**: Treat numerical answers as equivalent if they are correct to at
least two decimal places or more, depending on the precision provided in the solution. For
instance, both 0.913 and 0.91 should be accepted if the solution is accurate within two decimal
places.
5. **Symbolic and Algebraic Identities**: Recognize and accept equivalent algebraic forms, such
as \\( \\sin^2(x) + \\cos^2(x) = 1 \\) or \\( e^{{i\\pi}} + 1 = 0 \\), as correct.
6. **Trigonometric and Logarithmic Forms**: Accept equivalent trigonometric and logarithmic
expressions, acknowledging identities and transformations that might alter the form but not the
value.
7. **Comprehensive Evaluation**: Encourage the use of computational tools to check for
equivalence in cases where expressions are too complex for straightforward visual inspection.
## Expected Output Format:
Present your final answer with a score of '1' or '0' only. Do not include any additional information
or feedback in your response.
Please evaluate the student's response with precision, utilizing computational resources as
necessary to ensure accurate and fair grading.
Figure 4: Evaluation prompts.
-----
| [
"Meng, Fang",
"Xiangpeng, Wan",
"Fei, Lu",
"Fei, Xing",
"Kai, Zou"
] | 2024-06-26T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2406.18321 | https://arxiv.org/abs/2406.18321 | https://www.semanticscholar.org/paper/30444cf822cb2b1c8ca6638997d08c3fb1a9c7ff |
MathScape: Evaluating MLLMs in multimodal Math Scenarios through a Hierarchical Benchmark | With the development of Multimodal Large Language Models (MLLMs), the evaluation of multimodal models in the context of mathematical problems has become a valuable research field. Multimodal visual-textual mathematical reasoning serves as a critical indicator for evaluating the comprehension and complex multi-step quantitative reasoning abilities of MLLMs. However, previous multimodal math benchmarks have not sufficiently integrated visual and textual information. To address this gap, we proposed MathScape, a new benchmark that emphasizes the understanding and application of combined visual and textual information. MathScape is designed to evaluate photo-based math problem scenarios, assessing the theoretical understanding and application ability of MLLMs through a categorical hierarchical approach. We conduct a multi-dimensional evaluation on 11 advanced MLLMs, revealing that our benchmark is challenging even for the most sophisticated models. By analyzing the evaluation results, we identify the limitations of MLLMs, offering valuable insights for enhancing model performance. | A new benchmark that emphasizes the understanding and application of combined visual and textual information is proposed, MathScape, designed to evaluate photo-based math problem scenarios, assessing the theoretical understanding and application ability of MLLMs through a categorical hierarchical approach. | ## MathScape: Evaluating MLLMs in multimodal Math Scenarios through a Hierarchical Benchmark
**Minxuan Zhou[1*], Hao Liang[2*], Tianpeng Li[3], Zhiyu Wu[3], Mingan Lin[3], Linzhuang Sun[4], Yaqi**
**Zhou[3], Yan Zhang[3], Xiaoqin Huang[3], Yicong Chen[3], Yujing Qiao[3], Weipeng Chen[3], Bin Cui[2],**
**Wentao Zhang[2][†], Zenan Zhou[3†]**
1Nankai University 2Peking University 3Baichuan Inc. 4University of Chinese Academy of Sciences
[email protected], [email protected], [email protected], [email protected]
**Abstract**
With the development of Multimodal Large Language Models (MLLMs), the evaluation of multimodal models in the
context of mathematical problems has become a valuable research field. Multimodal visual-textual mathematical reasoning serves as a critical indicator for evaluating the comprehension and complex multi-step quantitative reasoning abilities of MLLMs. However, previous multimodal math benchmarks have not sufficiently integrated visual and textual information. To address this gap, we proposed MathScape, a new
benchmark that emphasizes the understanding and application of combined visual and textual information. MathScape
is designed to evaluate photo-based math problem scenarios,
assessing the theoretical understanding and application ability of MLLMs through a categorical hierarchical approach.
We conduct a multi-dimensional evaluation on 11 advanced
MLLMs, revealing that our benchmark is challenging even
for the most sophisticated models. By analyzing the evaluation results, we identify the limitations of MLLMs, offering valuable insights for enhancing model performance. The
code is made available https://github.com/PKU-BaichuanMLSystemLab/MathScape.
**1** **Introduction**
In recent years, there have been advancements in large language models (LLMs) (OpenAI 2023a; Touvron et al. 2023)
and MLLMs (Zhao et al. 2023; Wu et al. 2023; Bai et al.
2024). They have shown strong understanding ability among
different modalities (Liu et al. 2023b; Bai et al. 2023b).
Among Multimodal Large Language Models (MLLMs), Vision Language Large Models (VLLMs) have demonstrated
competitive performance in traditional multimodal tasks, including image classification (Chen et al. 2024), image understanding (Li et al. 2023b,c), and image captioning (Bai
et al. 2023b). Furthermore, their advanced language understanding capabilities contribute to strong performance
in text-rich tasks, such as visual question answering (Liu
et al. 2023b,a) and image-text retrieval (Chen et al. 2024).
Recently, VLLMs have also shown significant progress in
solving mathematical problems. Therefore, comprehensive
*These authors contributed equally.
†Corresponding author.
Copyright © 2025, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
**Easy**
**#206** **Hard**
**594** **Middle**
**Middle** **232** **#182**
**#339**
**Math**
**Middle School** **MathPrimary**
**Easy**
**Hard**
**High School**
**Math**
**Hard**
**#49**
**499**
**Middle#232**
**Easy**
**#217**
Figure 1: MathScape offers a comprehensive collection of
math problems from primary school to high school. The
problems range in difficulty from easy to difficult, catering
to various levels of evaluation.
benchmarks are essential to evaluate the mathematical abilities of VLLMs. Although several benchmarks, such as
MATH-V (Wang et al. 2024a), MathVerse (Zhang et al.
2024), and MathVista (Lu et al. 2023b), have been developed to assess the mathematical capabilities of VLLMs.
They primarily focus on a combination of text math problems and image figures. Also, they only use simple metrics
and lack effective evaluation for complex or extended responses. Consequently, they face two key challenges:
**C1. Insufficient Real-World Data. In previous datasets**
like MATH-V (Wang et al. 2024a), MathVerse (Zhang et al.
2024), and MathVista (Lu et al. 2023b), the mathematical
description was typically provided as text input, while the
image contained only figures. This approach doesn’t align
well with real-world scenarios, where both the mathematical description and figures are captured together in a single
-----
novel research direction. Mathematical reasoning is a crucial
indicator for evaluating the ability of LLMs to perform complex, multi-step reasoning and quantitative analysis within
visual contexts. Below, we highlight some relevant work and
the latest developments in this area.
**2.1** **Benchmark for Mathematical Evaluation**
Recent research has seen significant advancements in mathematical reasoning benchmarks aimed at evaluating mathematical abilities. In this summary, we review both pure text
and multimodal math benchmarks.
**Pure Text Benchmarks** GSM8K (Cobbe et al. 2021) is
a dataset from OpenAI that includes 8.5K high-quality elementary school math word problems, each requiring 2 to
8 steps to solve. These problems primarily involve basic
arithmetic operations such as addition, subtraction, multiplication, and division. MATH (Hendrycks et al. 2021) offers a dataset of 12,500 problems sourced from high school
math competitions. SuperCLUE-Math (Xu et al. 2024a) is
a Chinese benchmark for multi-step reasoning in mathematics, containing over 2,000 problems that require multistep reasoning and offer natural language solutions. MathBench (Liu et al. 2024b) includes 3,709 math problems ranging from basic arithmetic to college-level questions, covering multiple difficulty levels.
All these benchmarks focus exclusively on text-based
mathematical tasks. They are designed to evaluate the mathematical capabilities of LLMs through specialized problem
sets.
**Multimodal Benchmarks** With the rapid advancement of
MLLMs, several high-quality benchmarks have emerged to
evaluate mathematical problem-solving in visual contexts.
MathVista (Lu et al. 2023b) focuses on visual math QA
tasks, assessing model performance across various math domains, such as arithmetic and algebra, using visual scenarios. MATH-V (Wang et al. 2024a) is another benchmark that targets multimodal mathematical understanding,
with questions primarily sourced from math competitions.
MathVerse (Zhang et al. 2024) evaluates MLLMs’ comprehension of visual diagrams using CoT (Chain of Thought)
strategies on 2,612 multimodal math problems. CMMU (He
et al. 2024) is a large-scale Chinese benchmark for multidisciplinary, multimodal understanding, featuring questions
from college exams and textbooks.
Compared to these existing multimodal mathematical
benchmarks, which often have limitations in question
length, complexity, and openness to model answers, our
MathScape benchmark is designed to be longer and more
open-ended.
**2.2** **MLLMs for Mathematics**
**Commonly Used VLLMs** The integration of visual
knowledge into LLMs has become a pivotal area of research
due to the rapid advancements in LLMs. VLLMs combine
vision information from vision encoders with LLMs, thus
enabling these models to process and interpret visual inputs
for various visual tasks (Liu et al. 2023c; Zhang et al. 2022;
**Key Points:**
1.Identify the corresponding
angles and sides.
2.Apply the correct theorems
Figure 2: An problem example of MathScape. Examples in
MathScape are represented by images taken by humans, ensuring a more realistic scenario. Each example will contain
a correct answer.
image.
**C2. Absence of Effective Evaluation Metrics. In previ-**
ous datasets (Wang et al. 2024a; Zhang et al. 2024; Lu et al.
2023b), the evaluation was limited to short answers, lacking
the ability to assess long-form responses.
To address these issues, we implement a three-step
pipeline for constructing a real-world math image dataset.
As illustrated in Figure 3, the process begins by converting
math documents into images, as shown in Figure 2. Next,
we capture photos and screenshots to build the dataset. Finally, we perform a thorough review and knowledge classification to ensure the dataset’s high quality. For evaluation, we design a two-step pipeline specifically for assessing longer math problems. First, we use LLMs to extract answers for each subproblem. Then, we employ LLMs as evaluators to assess the correctness of each solution. With the
data construction and evaluation pipeline, we constructed
MathScape, a new multimodal dataset that combines photos of real-world math problems with their correct answers.
The core contributions are summarized as follows:
- New Perspective: To the best of our knowledge, we
are the first to construct images that combine both figures and mathematical text descriptions, closely mirroring real-world scenarios.
- New Method: We propose a novel three-step dataset
construction pipeline, as illustrated in Figure 3. Additionally, we introduce a new two-step evaluation method
specifically designed for assessing long answers.
- New Benchmark: We present MathScape, a new multimodal mathematical dataset that spans various difficulty
levels, question types, and knowledge areas, providing
a comprehensive tool to evaluate the mathematical capabilities of MLLMs. Moreover, MathScape is entirely
original, consisting of previously unreleased multimodal
mathematical data.
**2** **Related Work**
In the field of MLLMs, the benchmark for multimodal mathematical reasoning capability represents a significant and
-----
**3.2** **Multidimensional Evaluation**
To comprehensively evaluate the performance of VLLMs,
we designed multiple dimensions to classify and assess their
mathematical abilities across various categories. The classification types we used are as follows:
**Question Types:** We first categorized the test questions
into different types, such as multiple-choice, fill-in-theblank (Solution), and proof questions, to examine the
model’s performance across various question formats.
**Knowledge Points:** We also classified the questions based
on mathematical knowledge areas, including algebra, geometry, probability, and statistics, to assess the model’s proficiency in different domains of mathematics.
**Educational Stages:** Additionally, the questions were divided according to the educational stage—primary school,
middle school, and high school—to evaluate the model’s
adaptability and accuracy at different levels of education.
**3.3** **Evaluation Method**
We utilize a two-step evaluation process to effectively score
long answers.
**Answer Segmentation:** As illustrated in Figure 4, we
prompt the LLMs to decompose a lengthy answer into multiple sub-answers, each one focusing on a specific aspect of
the problem. This segmentation ensures that the complex answer is broken down into manageable components, making
it easier to evaluate the correctness and relevance of each
part. By isolating sub-problems within the overall solution,
we can achieve a more granular analysis of the model’s performance.
**Sub-Answer Scoring:** After segmenting the long answer,
we employ the prompt depicted in Figure 12 to automatically score each sub-answer individually. This method allows us to evaluate the accuracy of each component independently, ensuring that the final score reflects the model’s
ability to handle various aspects of the problem comprehensively. By scoring sub-answers separately, we can identify
specific areas where the model excels or struggles, providing deeper insights into its strengths and weaknesses.
**3.4** **Dataset Statistics**
In this section, we provide a summary of the statistics for
our MathScape dataset. The dataset primarily consists of
Chinese image-text problems, along with question labels,
attribute information, problem-solving processes, and standard reference answers. Detailed statistics are presented in
Figure 5.
As shown in Figure 5(a), our dataset thoughtfully incorporates the characteristics of multimodal image-text questions. A significant portion of the questions are geometric,
which often require the integration of images for effective
problem-solving. In contrast, topics like equations and inequalities are less represented, aligning more closely with
the specific demands of multimodal assessment.
Figure 5(b) illustrates that our dataset primarily includes
solution questions and multiple-choice questions, with fewer
Li et al. 2022b) with enhanced accuracy and efficiency.
Pioneering frameworks like CLIP (Radford et al. 2021)
leverage contrastive learning on expansive image-caption
datasets to align modalities, forming the groundwork for
cross-modal comprehension. Various adapters (Liu et al.
2023b,a; Li et al. 2023b, 2022a; Jian, Gao, and Vosoughi
2023; Lu et al. 2023a) are introduced to further integrate different modalities. For example, LLaVA (Liu et al. 2023b,a)
employs a straightforward MLP to inject the vision information into LLMs. Whereas more complex implementations
like the Q-Former in BLIP (Li et al. 2022a, 2023b) utilize
cross-attention to enhance modality integration.
Recent studies (Wang et al. 2024b; Chen et al. 2023; Liu
et al. 2023b,a; Li et al. 2023a) aims to boost VLLM performance by focusing on the quality of both pre-training and
fine-tuning datasets. Models like LLaVA (Liu et al. 2023b,a)
and ShareGPT4V (Chen et al. 2023) have shown remarkable advancements in understanding and following complex
instructions through instruction tuning.
**VLLMs Designed for Math Problems** MAmmoTH (Yue
et al. 2023) InternLM-Math (Ying et al. 2024), and
ChatGLM-Math (Xu et al. 2024b) are multimodal models
specifically tailored for dealing with mathematical questions, incorporating both textual and visual components in
their problem design to enhance their ability to handle complex mathematical tasks.
**3** **Methodology**
We begin by introducing the construction pipeline of MathScape in Section 3.1. Next, we present the multidimensional
evaluation approach in Section 3.2. In Section 3.3, we detail
the two-step answer evaluation method. Finally, we summarize the dataset statistics in Section 3.4.
**3.1** **Construction of MathScape**
**Data Preparation** The data preparation module consists
of three steps, as shown in Figure 3(a). First, we collected
a large number of mathematics questions from elementary,
junior high, and senior high school exams and homework as
the evaluation sample. We gathered a total of 1,325 image
mathematics questions. Next, the question documents were
converted to PDF format using Pandoc and subsequently
transformed into images for further use.
**Data Annotation** As illustrated in Figure 3(b), the images
are then transformed to closely align with real-world scenarios by capturing photos of printed images and screen displays.
**Data Check and Knowledge Classification** After constructing the dataset, we perform a double-check and
knowledge-based classification to ensure its high quality. As
illustrated in Figure 3(c), we rigorously review the dataset
to ensure that both the textual and graphical inputs are clear
and accurate. Once data quality is verified, we categorize the
data according to knowledge points.
-----
**(b) Human Annotation**
Markdown Print & Take Photo
pandoc
Real Math
Math Images
Images
PDF Conversion Take Photo of the Screen
Math Document Math Images
Image Conversion
Classification Quality Check
Real Math
Images
**(a) Data Preparation** **(c) Check and Knowledge Classification**
Figure 3: MathScape process pipeline.
(a) Proportion by knowledge points (b) Proportion by question type
Figure 5: Proportion Figure
rank among the top performers on major multimodal LLM
leaderboards. This included 11 different types of VLLMs,
with a particular emphasis on analyzing the results and performance of the leading models.
- Closed-source models: GPT4 (OpenAI 2023b), GeminiPro (Reid et al. 2024), Claude-3-Opus, Baichuan-VL
(Yang et al. 2023), Qwen-Max (Bai et al. 2023a), QwnPlus (Bai et al. 2023a), GLM4V.
- Open-source models: Deepseek-VL(Lu et al. 2024),
LLaVA(Liu et al. 2024a), Yi(Young et al. 2024).
**Settings.** We conduct all model inferences in a zero-shot
setting, using the same configuration for each official model.
Instead of the Chain of Thought (CoT) technique, we use a
custom prompt to guide the model in producing the problemsolving process and final answer, as shown in Figure 4. The
settings include a max token limit of 2048, top-k of 5, a temperature of 0.3, and a repetition penalty of 1.05. All experiments are run on NVIDIA H100 GPUs.
**4.2** **Performance of Various Models**
In this section, we present the performance of commonly
used MLLMs on our benchmark. We analyze the results
from the perspectives of Question Types, Knowledge Points,
and Educational Stages:
**Question Types** As shown in Table 1, GPT-4V and GPT4-turbo exhibits the highest accuracy across all question
types, with an average of 34.96%, followed by GPT-4-Turbo
Vision at 33.92%. While Yi-VL-34B and DeepSeek-V2
**System: "You will play the role of a problem-solving assistant skilled**
in solving math problems. Your task is to analyze and solve math
problems based on both textual and visual information. You need to
understand the meaning of the problem presented in the image and
combine the text recognized from the image to solve the problem step
by step."
**Demand: "You need to have a comprehensive understanding of both**
the text and the image, and then answer the question in the text.
**Note: The final output should be in JSON format, with the following**
structure: { "solution": "Explanation of the problem-solving
process...", "answer": "Final answer" }."
_Prompt-Inference_
You need to extract the expressions of the student's answers
for each sub-question.
Student's response: {response}
You need to output the following:
Student's answers: {{Extracted student's answers result:
(1){{Student's answer}}
(2){{Student's answer}}
(3){{Student's answer}}
(4)....}} _Prompt-Extract_
Figure 4: Prompts for inference and extracting answers.
proof questions. This distribution indicates that our dataset
is designed to challenge models with diverse question types,
while still reflecting the real-world emphasis on practical
problem-solving.
Overall, our dataset contains a total of 1,325 images, providing a robust resource for evaluating the mathematical reasoning capabilities of MLLMs.
**4** **Experiments and Analysis**
In this section, we utilize multiple state-of-the-art (SOTA)
models and test their performance on the MathScape benchmark.
**4.1** **Experimental Setups**
**Models.** In our evaluation of multimodal LLMs, we focused on both open-source and closed-source models that
-----
Table 1: Accuracy scores comparison of models on differ**ent question types**
Model Average Choice Solution Proof
Closed-source Models
GPT-4V **34.96** **35.75** **31.72** 28.33
GPT-4-turbo 33.92 29.85 31.58 **56.62**
Claude-3-Opus 28.79 29.3 20.85 50.00
Gemini-Pro 21.37 12.62 16.16 37.50
Baichuan-VL 30.00 26.38 25.83 45.97
Qwen-VL-Max 27.83 23.97 22.17 34.85
Qwen-VL-Plus 15.60 19.46 12.48 35.19
GLM4V 12.26 11.54 7.31 26.28
Open-source Models
Yi-VL-34B **18.36** **19.01** 9.98 33.33
DeepSeek-V2 15.66 12.75 **10.60** **37.69**
LLaVA-1.6-7B 12.35 11.31 6.24 13.43
achieve good performance among open-source models. We
can see the performance of closed-source models achieved
better performance than open-source models.
The table shows that models generally perform better on
proof questions compared to multiple-choice and solution
questions. This suggests that the structured format and clear
information in proof questions make them easier for models
to handle, while solution questions, which require complex,
multi-step reasoning, pose more of a challenge.
**Knowledge Points** Table 2 shows the answer accuracy of
the models in different knowledge points. GPT-4V and GPT4-turbo consistently outperform other models in areas like
algebra, equations and inequalities, functions, and probability and statistics. Most models show balanced performance
across different knowledge areas, but there are exceptions,
such as LLaVA-1.6, which does well in equations and inequalities but struggles with functions.
Overall, closed-source models are more accurate than
open-source ones, with GPT-4V and GPT-4-turbo leading in
many categories.
**Educational Stages** Table 3 presents the performance of
open-source and closed-source models on MathScape at the
elementary, middle, and high school levels. At the elementary and middle levels, the models perform similarly. However, when the difficulty increases to the high school level,
we observe a significant drop in accuracy. Some models
show an extreme decrease in performance between the middle and high school benchmarks. For instance, Gemini-Pro
has an average accuracy of 25.79% at the elementary level,
but this sharply declines to just 10.22% at the high school
level. This suggests that high school-level math poses significant challenges for LLMs.
Overall, our evaluation shows that closed-source models, particularly GPT-4V and GPT-4-turbo, consistently outperform open-source models across various question types,
knowledge points, and educational stages. These models demonstrate superior accuracy, especially in structured
30
25
)20
%
(15
10
Percentage
5
0
1 2 3 4 5
GPT4V Claude-3-Opus Baichuan-VL Qwen-VL-Max
Figure 6: Stability Analysis: For each problem, the model is
tested five times. The numbers 1 to 5 represent the proportion of correct responses.
question types like proof questions and in areas requiring
advanced mathematical reasoning, such as algebra and probability. However, as the difficulty level increases, all models
experience a decline in accuracy, with the most significant
drops occurring between the middle and high school stages.
GLM-4V performs particularly poorly at the high school
level, highlighting the challenges that remain in achieving
consistent performance on difficult math problems.
Figure 7: The variation of accuracy with answer length.
**4.3** **Stability Results and Analysis**
In this subsection, we perform a stability test for GPT4V,
Claude-3-Opus, Baichuan-VL, and Qwen-VL-Max. We selected 300 problems and tested each model five times on
each problem. The number of correct answers across these
attempts was calculated to assess the stability of each model.
As shown in Figure 6, none of the models demonstrate high
stability—only about 25% of the problems were answered
correctly in all five attempts. Therefore, it’s imperative to
focus on enhancing the stability and robustness of math
MLLMs, as consistent performance across repeated trials is
crucial for their practical application in real-world scenarios. This finding also suggests that future research should
explore methods to reduce variability in model outputs, en
-----
Table 2: Accuracy scores comparison of Models on different knowledge points
**Model** **Algebraic** **Geometric** **Equations and Inequality** **Functions** **Probability Statistics**
**Closed-source Models**
GPT-4V **39.05** 27.90 29.73 **34.14** **41.31**
GPT4-turbo 36.28 **29.54** **32.50** 28.43 37.99
Claude-3-Opus 31.78 22.67 20.83 20.58 36.22
Gemini-Pro 21.13 15.50 15.35 9.57 13.33
Baichuan-VL 30.54 25.98 25.83 26.69 23.67
Qwen-VL-Max 28.71 21.86 28.33 20.86 19.09
Qwen-VL-Plus 16.70 17.07 18.67 16.67 11.46
GLM4V 8.94 12.57 5.13 7.32 10.55
**Open-source Models**
Yi-VL-34B **16.78** **15.84** 7.02 9.79 **11.44**
DeepSeek-V2 12.71 14.87 6.19 **10.60** 9.61
LLaVA-1.6-7B 9.76 8.58 **15.79** 3.57 10.77
Table 3: Comparison of Models on different knowledge stages (E: Easy, M: Medium, D: Difficult, Avg: Average Score)
**Model** **Elementary** **Middle** **High**
avg E M D avg E M D avg E
**Closed-source Models**
GPT-4V 36.04 57.58 38.64 10.71 **36.42** **40.38** **34.95** 30.14 **28.08** **33.26** 24.38 **22.57**
GPT4-turbo **37.71** **72.73** **38.79** **18.33** 35.12 37.22 34.51 **30.44** 26.06 28.65 **25.19** 18.83
Claude-3-Opus 28.30 33.33 31.10 10.04 31.04 31.29 33.97 12.22 19.17 24.07 16.41 15.15
Gemini-Pro 25.79 48.48 26.91 11.29 17.20 19.19 16.29 15.07 10.22 12.74 8.90 5.03
Baichuan-VL 29.85 35.00 31.45 18.33 29.96 28.94 32.57 21.38 22.33 27.59 17.42 16.01
Qwen-VL-Max 34.82 42.86 36.65 20.45 24.87 25.70 24.96 20.72 16.95 18.97 15.61 14.92
Qwen-VL-Plus 20.49 40.00 21.23 9.20 19.16 21.11 18.83 13.19 11.00 13.94 9.29 5.83
GLM4V 10.32 33.29 9.62 4.29 13.28 17.07 14.85 12.89 7.64 8.73 11.11 4.08
**Open-source Models**
Yi-VL-34B **14.99** 40.00 **16.13** 3.32 **16.38** **16.31** **17.10** 11.67 **12.14** **11.65** **12.96** **10.58**
DeepSeek-V2 13.74 **42.42** 13.73 2.87 14.93 14.68 14.47 **19.09** 10.18 8.29 12.46 7.99
LLaVA-1.6-7B 9.77 35.21 10.82 **7.12** 10.37 9.79 10.90 9.07 7.57 8.41 6.53 4.54
suring more reliable and trustworthy results.
**4.4** **Answer Length and Accuracy**
**Distribution of Answer Lengths** From Figure 7, we observe distinct patterns in the distribution of answer lengths
across different models. Notably, GPT-4V and Baichuan-VL
tend to generate a larger proportion of shorter answers. As
illustrated in Figure 8, it is evident that shorter but accurate
answers are more likely to achieve higher scores. This trend
highlights the efficiency of models that can deliver concise and precise responses, particularly in scenarios where
brevity is valued.
**Analysis of Answer’s Length** In our evaluation of the
MathScape benchmark, we observed that there is no
straightforward positive correlation between answer length
and accuracy. In fact, as shown in Figure 8, when the length
of the answer increases, the accuracy tends to decrease.
This result demonstrates the robustness of the MathScape
benchmark, ensuring that models cannot simply inflate their
40
35
30
25
20
15
10
375 750 1125 1500 max
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||GPT-4V Qwen-VL-M|ax|
|||||Gemini-Pro Baichuan-VL||
|||||||
|||||||
|||||||
|||||||
|||||||
|||||||
|||||||
GPT-4V
Qwen-VL-Max
Gemini-Pro
Baichuan-VL
Answer Length
Figure 8: The variation of accuracy with answer length.
scores by producing longer answers. Such a design effectively prevents any biases in answering strategies, ensuring
that the benchmark and evaluation method accurately reflects a model’s true ability to understand and solve math
-----
Given the figure, it is known that AM∥BN, ∠A=60∘, First, since BD is the bisector of ∠PBN, similarly, we can
point P is a moving point on the ray AM (not conclude that ∠PBD=∠ABD=60°. Then, because
coincident with A), BC and BD bisect ∠ABP and ∠APB=∠ABP+∠ABD, we know that ∠ABP=∠ABC
∠PBN respectively. The ray AM intersects C and D. (since BC bisects ∠ABP), which equals 60°. Therefore,
(There needs to be a reasoning process, but it is not ∠APB=60°+60°=120°. Now, we can calculate ∠ADB.
necessary to write out every step.) Since ∠ABD=60°, and we also know that ∠BAD=60°, in
the isosceles triangle ΔABD,
∠ADB=180°−∠BAD−∠ABD=180°−60°−60°=60∘
Therefore, we can say ∠APB=120°=2×60°=2∠ADB
**(b) GPT4 Response**
1. Find the measure of ∠CBD
2. Prove that ∠APB=2∠ADB **Judge: The proof is complete incorrect.**
3. When point P moves such that ∠ACB=∠ABD, **Difficulty: Recognition of spatial angle**
find the measure of ∠ABC. relationships, spatial deduction and proof.
**(a) Math Problem** **(c) Challenges**
Figure 9: Illustration of Case Study 1. The red sign marks the incorrect part of the answer.
ematical problems, rather than gaining an unfair advantage
through verbose responses.
**5** **Challenges and Future Directions**
As shown in Section 4, none of the models performed well
on our MathScape benchmark. In this section, we discuss the
challenges faced by current MLLMs and suggest potential
future directions for improving math MLLMs.
**5.1** **Challenges**
In this subsection, we explore the main reasons why models provide incorrect answers to image-text mathematical
problems. These errors are mainly due to challenges in understanding and interpreting the information. We can break
down these challenges into the following specific reasons:
**Unable to Retrieve Information from the Image:** This
is one of the most common errors, where models may fail
to extract all the relevant information from the image. For
instance, when interpreting complex geometric patterns, it’s
easy to overlook certain data or conditions, leading to incorrect answers. As shown in Case Study 1 in Figure 9, the
model provided an incorrect proof due to its incomplete understanding of the image.
**Misunderstanding of Graphic Positioning:** This issue
involves the accurate understanding of the spatial layout of
graphics. For instance, in geometry problems, errors can occur if the model fails to correctly recognize the lengths or angles of figures. Such mistakes often stem from a lack of deep
understanding of graphic properties or insufficient ability to
shift perspectives. In Figure 10, Case Study 2, the model incorrectly interprets the distance from point A to 0 as _√2._
**Insufficient Reasoning Ability:** This issue arises from the
limited logical reasoning capabilities of LLMs. Even when
the image information is provided correctly, the LLM may
still produce incorrect responses. As shown in Case Study 3
in Figure 11, the LLM fails to solve the complex problem
correctly and makes errors in the process.
Overall, the challenges for multimodal large models primarily focus on the interpretation of visual information and
the inherent reasoning abilities of the models.
**5.2** **Future Directions for Math MLLMs**
MathScape have introduced several challenges for MLLMs,
as mentioned in section 5.1. In this section, we summarize
future directions for MLLMs.
**Stronger LLMs** As discussed in section 5.1, it is evident
that LLMs have limited mathematical reasoning abilities. To
improve the math problem-solving capabilities of MLLMs,
it is crucial to develop stronger LLMs with enhanced mathematical reasoning skills.
**Better Pattern Recognition** Improving pattern recognition is essential for enhancing the performance of MLLMs,
particularly in tasks involving complex visual information.
Current models often struggle with identifying and interpreting intricate patterns in images, such as geometric configurations, charts, and fine-grained visual details. Future research
should focus on developing models that can more accurately
recognize and differentiate patterns, especially when they
are complex.
**6** **Conclusion**
Recently, MLLMs have emerged as powerful models for answering questions across multiple domains. However, comprehensive benchmarks that reflect real-world scenarios are
needed to evaluate their mathematical performance. In this
paper, we introduce MathScape, a new benchmark designed
to assess the math capabilities of MLLMs using entirely
original, leak-free images. Additionally, we propose a novel
two-step evaluation method specifically for assessing long
answers. MathScape not only challenges existing MLLMs
but also aims to inspire the development of more advanced
math-focused MLLMs.
-----
**References**
Bai, J.; Bai, S.; Chu, Y.; Cui, Z.; Dang, K.; Deng, X.; Fan,
Y.; Ge, W.; Han, Y.; Huang, F.; et al. 2023a. Qwen technical
report. arXiv preprint arXiv:2309.16609.
Bai, J.; Bai, S.; Yang, S.; Wang, S.; Tan, S.; Wang, P.; Lin, J.;
Zhou, C.; and Zhou, J. 2023b. Qwen-vl: A versatile visionlanguage model for understanding, localization, text reading, and beyond.
Bai, T.; Liang, H.; Wan, B.; Yang, L.; Li, B.; Wang, Y.; Cui,
B.; He, C.; Yuan, B.; and Zhang, W. 2024. A Survey of Multimodal Large Language Model from A Data-centric Perspective. arXiv preprint arXiv:2405.16640.
Chen, L.; Li, J.; Dong, X.; Zhang, P.; He, C.; Wang, J.;
Zhao, F.; and Lin, D. 2023. ShareGPT4V: Improving
Large Multi-Modal Models with Better Captions. _CoRR,_
abs/2311.12793.
Chen, Z.; Wu, J.; Wang, W.; Su, W.; Chen, G.; Xing, S.;
Zhong, M.; Zhang, Q.; Zhu, X.; Lu, L.; et al. 2024. Internvl: Scaling up vision foundation models and aligning
for generic visual-linguistic tasks. In Proceedings of the
_IEEE/CVF Conference on Computer Vision and Pattern_
_Recognition, 24185–24198._
Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.;
Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.;
et al. 2021. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168._
He, Z.; Wu, X.; Zhou, P.; Xuan, R.; Liu, G.; Yang, X.; Zhu,
Q.; and Huang, H. 2024. CMMU: A Benchmark for Chinese Multi-modal Multi-type Question Understanding and
Reasoning. arXiv preprint arXiv:2401.14011.
Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart,
S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring
mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874._
Jian, Y.; Gao, C.; and Vosoughi, S. 2023. Bootstrapping
Vision-Language Learning with Decoupled Language Pretraining. In Advances in Neural Information Processing
_Systems 36: Annual Conference on Neural Information Pro-_
_cessing Systems 2023, NeurIPS 2023, New Orleans, LA,_
_USA, December 10 - 16, 2023._
Li, B.; Zhang, Y.; Chen, L.; Wang, J.; Yang, J.; and Liu, Z.
2023a. Otter: A Multi-Modal Model with In-Context Instruction Tuning. CoRR, abs/2305.03726.
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023b. Blip-2:
Bootstrapping language-image pre-training with frozen image encoders and large language models. In International
_conference on machine learning, 19730–19742. PMLR._
Li, J.; Li, D.; Savarese, S.; and Hoi, S. 2023c. Blip-2:
Bootstrapping language-image pre-training with frozen image encoders and large language models. In International
_conference on machine learning, 19730–19742. PMLR._
Li, J.; Li, D.; Xiong, C.; and Hoi, S. C. H. 2022a.
BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation. In In_ternational Conference on Machine Learning, ICML 2022,_
_17-23 July 2022, Baltimore, Maryland, USA, volume 162,_
12888–12900.
Li, L. H.; Zhang, P.; Zhang, H.; Yang, J.; Li, C.; Zhong,
Y.; Wang, L.; Yuan, L.; Zhang, L.; Hwang, J.; Chang, K.;
and Gao, J. 2022b. Grounded Language-Image Pre-training.
In IEEE/CVF Conference on Computer Vision and Pattern
_Recognition, CVPR 2022, New Orleans, LA, USA, June 18-_
_24, 2022, 10955–10965. IEEE._
Liu, H.; Li, C.; Li, Y.; and Lee, Y. J. 2023a. Improved
baselines with visual instruction tuning. _arXiv preprint_
_arXiv:2310.03744._
Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2023b. Visual Instruction Tuning. In Advances in Neural Information Process_ing Systems 36: Annual Conference on Neural Information_
_Processing Systems 2023, NeurIPS 2023, New Orleans, LA,_
_USA, December 10 - 16, 2023._
Liu, H.; Li, C.; Wu, Q.; and Lee, Y. J. 2024a. Visual instruction tuning. Advances in neural information processing
_systems, 36._
Liu, H.; Zheng, Z.; Qiao, Y.; Duan, H.; Fei, Z.; Zhou,
F.; Zhang, W.; Zhang, S.; Lin, D.; and Chen, K. 2024b.
MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark. arXiv preprint arXiv:2405.12209.
Liu, S.; Zeng, Z.; Ren, T.; Li, F.; Zhang, H.; Yang, J.; Li,
C.; Yang, J.; Su, H.; Zhu, J.; and Zhang, L. 2023c. Grounding DINO: Marrying DINO with Grounded Pre-Training for
Open-Set Object Detection. CoRR, abs/2303.05499.
Lu, H.; Liu, W.; Zhang, B.; Wang, B.; Dong, K.; Liu, B.;
Sun, J.; Ren, T.; Li, Z.; Sun, Y.; et al. 2024. Deepseek-vl:
towards real-world vision-language understanding. _arXiv_
_preprint arXiv:2403.05525._
Lu, J.; Gan, R.; Zhang, D.; Wu, X.; Wu, Z.; Sun, R.; Zhang,
J.; Zhang, P.; and Song, Y. 2023a. Lyrics: Boosting Finegrained Language-Vision Alignment and Comprehension
via Semantic-aware Visual Objects. CoRR, abs/2312.05278.
Lu, P.; Bansal, H.; Xia, T.; Liu, J.; Li, C.; Hajishirzi,
H.; Cheng, H.; Chang, K.-W.; Galley, M.; and Gao, J.
2023b. Mathvista: Evaluating mathematical reasoning of
foundation models in visual contexts. _arXiv preprint_
_arXiv:2310.02255._
OpenAI. 2023a. ChatGPT.
OpenAI, R. 2023b. Gpt-4 technical report. arxiv
2303.08774. View in Article, 2(5).
Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.;
Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.;
et al. 2021. Learning transferable visual models from natural language supervision. In International conference on
_machine learning, 8748–8763. PMLR._
Reid, M.; Savinov, N.; Teplyashin, D.; Lepikhin, D.; Lillicrap, T.; Alayrac, J.-b.; Soricut, R.; Lazaridou, A.; Firat, O.;
Schrittwieser, J.; et al. 2024. Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context.
_arXiv preprint arXiv:2403.05530._
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux,
M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.;
Azhar, F.; et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
-----
Wang, K.; Pan, J.; Shi, W.; Lu, Z.; Zhan, M.; and Li, H.
2024a. Measuring multimodal mathematical reasoning with
math-vision dataset. arXiv preprint arXiv:2402.14804.
Wang, W.; Mrini, K.; Yang, L.; Kumar, S.; Tian, Y.; Yan,
X.; and Wang, H. 2024b. Finetuned Multimodal Language
Models Are High-Quality Image-Text Data Filters. CoRR,
abs/2403.02677.
Wu, J.; Gan, W.; Chen, Z.; Wan, S.; and Yu, P. S. 2023. Multimodal large language models: A survey. arXiv preprint
_arXiv:2311.13165._
Xu, L.; Xue, H.; Zhu, L.; and Zhao, K. 2024a. SuperCLUEMath6: Graded Multi-Step Math Reasoning Benchmark for
LLMs in Chinese. arXiv preprint arXiv:2401.11819.
Xu, Y.; Liu, X.; Liu, X.; Hou, Z.; Li, Y.; Zhang, X.; Wang,
Z.; Zeng, A.; Du, Z.; Zhao, W.; et al. 2024b. ChatGLMMath: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline. arXiv preprint
_arXiv:2404.02893._
Yang, A.; Xiao, B.; Wang, B.; Zhang, B.; Bian, C.; Yin, C.;
Lv, C.; Pan, D.; Wang, D.; Yan, D.; et al. 2023. Baichuan
2: Open large-scale language models. _arXiv preprint_
_arXiv:2309.10305._
Ying, H.; Zhang, S.; Li, L.; Zhou, Z.; Shao, Y.; Fei, Z.; Ma,
Y.; Hong, J.; Liu, K.; Wang, Z.; et al. 2024. Internlm-math:
Open math large language models toward verifiable reasoning. arXiv preprint arXiv:2402.06332.
Young, A.; Chen, B.; Li, C.; Huang, C.; Zhang, G.; Zhang,
G.; Li, H.; Zhu, J.; Chen, J.; Chang, J.; et al. 2024.
Yi: Open foundation models by 01. ai. _arXiv preprint_
_arXiv:2403.04652._
Yue, X.; Qu, X.; Zhang, G.; Fu, Y.; Huang, W.; Sun, H.; Su,
Y.; and Chen, W. 2023. Mammoth: Building math generalist
models through hybrid instruction tuning. _arXiv preprint_
_arXiv:2309.05653._
Zhang, H.; Zhang, P.; Hu, X.; Chen, Y.; Li, L. H.; Dai, X.;
Wang, L.; Yuan, L.; Hwang, J.; and Gao, J. 2022. GLIPv2:
Unifying Localization and Vision-Language Understanding.
In Advances in Neural Information Processing Systems 35:
_Annual Conference on Neural Information Processing Sys-_
_tems 2022, NeurIPS 2022, New Orleans, LA, USA, Novem-_
_ber 28 - December 9, 2022._
Zhang, R.; Jiang, D.; Zhang, Y.; Lin, H.; Guo, Z.; Qiu, P.;
Zhou, A.; Lu, P.; Chang, K.-W.; Gao, P.; et al. 2024. Mathverse: Does your multi-modal llm truly see the diagrams in
visual math problems? arXiv preprint arXiv:2403.14624.
Zhao, W. X.; Zhou, K.; Li, J.; Tang, T.; Wang, X.;
Hou, Y.; Min, Y.; Zhang, B.; Zhang, J.; Dong, Z.; et al.
2023. A survey of large language models. arXiv preprint
_arXiv:2303.18223._
-----
**A** **More Case Study**
We present additional case studies. As shown in Figure 10
and Figure 11, the model struggles with complex reasoning
tasks. This highlights the need to develop stronger reasoning
models.
-----
As shown in the figure, using a unit length segment on the First, the problem describes a square with a side length equal
number line as the side of a square, with the point to the unit length representing the number 1 on the number
representing the number 1 as the center of a circle, a line. Let's denote the side length of this square as 1 unit.
semicircle is drawn with a radius equal to the side length Next, the problem states that the point representing the
of the square. The semicircle intersects the number line at number 1 is the center of a circle, and the radius of the arc is
point A. The number represented by point A is ____. equal to the length of the diagonal of the square. We know
that the length of the diagonal of a square is 2 times the side
length, so in this case, the diagonal length is 2 .
Therefore, with 1 as the center and 2 as the radius, the
length of this arc represents the distance on the number line.
Simply put, the arc intersects the number line at point A, and
the distance from point A to the origin (0) on the number line
**(a) Math Problem** is 2 .
So, the number represented by point A is 1 plus 2, which
**Judge: The answer is incorrect.**
means point A represents the number 1+ 2.
**Difficulty: Weak spatial ability and spatial position** **Final Answer: 1+** 2
reasoning skills.
**(c) Challenges** **(b) GPT4 Response**
Figure 10: Illustration of Case Study 2. The red sign marks the incorrect part of the answer.
|A-C|C-D|E-D|F-E|G-F|B-G|
|---|---|---|---|---|---|
|90|70|-60|50|-40|60|
When actually measuring the height of a mountain, Based on the information provided in the image, let's first interpret the data in the table:
you can measure the relative height between adjacent A-C = 90 meters
observable points among several observation points C-D = 70 meters
and then use these relative heights to calculate the E-D = -60 meters
mountain's height. The table below records some of the F-E = 50 meters
measurement data (e.g., A-C=90 shows A is 90 meters G-F = -40 meters
higher than $C$): B-G = 60 meters
Now, let's solve the questions step by step:
**A-C** **C-D** **E-D** **F-E** **G-F** **B-G**
**(1) First, let's compare the heights of observation points C and E. Since E-D is -60 meters, this**
**90** 70 -60 50 -40 60
means point D is 60 meters higher than point E. C-D is 70 meters, meaning point C is 70 meters
1) Based on the data of C-D and E-D, is observation higher than point D. Therefore, the height of point C relative to point E is the 70 meters that C is
point C higher or lower than observation point E? By higher than D plus the 60 meters that D is higher than E, for a total of 130 meters. Thus,
how much? observation point C is 130 meters higher than observation point E.
**(2) We can calculate the height of point A relative to point B by combining the consecutive height**
2) How much higher is observation point A compared differences:
to observation point B? A-C = 90 meters
C-B = (C-D) + (D-E) + (E-F) + (F-G) + (B-G)
3) How much higher is the highest observation point = 70 + 60 + 50 + 40 + 60
compared to the lowest observation point? = 280 meters
So, the height of point A relative to point B is A-C + C-B = 90 + 280 = 370 meters.
**(3) To find the highest and lowest observation points, we can observe the height differences relative**
**(a) Math Problem** to each other. From the data provided, the highest observation point is point A, and the lowest
observation point is point E (as no point is lower than E). From the previous calculations, we already
**(1) 130 meters;** know that point A is 130 meters higher than point E.
**(2) 150 meters;** Since point A is 90 meters higher than point C, and point C is 130 meters higher than point E, we
**(3) 220 meters** have:
**Challenges:** The highest point A is 220 meters higher than the lowest point E:
1.The reasoning chain is too long; involves six steps of A-E = A-C + C-E = 90 + 130 = 220 meters.
reasoning. This concludes the solution process. Here are the answers to each question:
2.Requires accurate memory ability. **(1) The height difference by which observation point C is higher than observation point E: 130**
**Correct & Incorrect Answers:** **meters**
Correct answers are marked as green. **(2) The height of observation point A relative to observation point B: 370 meters**
Incorrect answers are marked as red. **(3) The height difference between the highest and the lowest observation points: 220 meters**
**(b) Correct Answer & Challenges** **(c) GPT4 Response**
Figure 11: Illustration of Case Study 3. The red sign marks the incorrect part of the answer.
-----
**B** **Prompt for Scoring Answers**
We summarize the prompt for scoring answers in Figure 12.
-----
Task Description: Evaluate whether the student's answer to the given math problem is correct.
Input:
1. Problem Description: [Detailed description of the problem, including necessary mathematical
formulas and conditions.]{question},
2. Reference Answer: [Detailed explanation of the correct answer, including the calculation process
and result.]{answer},
3. Student's Answer: [The student's provided answer, including the calculation process and
result.]{response},
Requirements:
- Carefully compare the student's answer with the reference answer.
- Analyze the correctness of the student's answer, including the calculation process and the final result.
- If the student's answer is incorrect, identify the error and briefly explain the reason for the mistake.
- Provide a concise evaluation conclusion, clearly stating whether the student's answer is correct.
Example:
Problem Description: Calculate the area of a triangle with a base of 6 cm and a height of 3 cm.
Reference Answer: (1) Area = 0.5 * base * height = 0.5 * 6 cm * 3 cm = 9 cm².
Student's Answer: (1) Area = 6 cm * 3 cm = 18 cm².
Evaluation:
(1) False, explanation as follows:
- The student's calculation process ignored the 1/2 coefficient in the area formula.
- The result is incorrect; the correct calculation should yield 9 cm², not 18 cm².
- Conclusion: The student's answer is incorrect.
Based on the above task description and requirements, compare the reference answer and the
student's answer in order. Carefully consider whether they are consistent.
2. If the student's answer is correct, output True; otherwise, output False and provide an evaluation
conclusion.
You need to output:
Only the True or False for each question, example: Judgement result: (1) True, (2) False, (3) True
Explanation as follows: (1)... (2)... (3)...
Figure 12: Prompt used for scoring answers.
-----
**C** **Visualization of MathScape**
**C.1** **Visualization of Performance of Different**
**Models.**
We include additional math samples in MathScape, as illustrated in Figure 13.
**D** **Reproducibility Checklist**
**Question1: Includes a conceptual outline and/or pseu-**
docode description of AI methods introduced?
Answer: Yes.
**Question2: Clearly delineates statements that are opinions,**
hypothesis, and speculation from objective facts and results?
Answer: Yes.
**Question3: Provides well marked pedagogical references**
for less-familiare readers to gain background necessary to
replicate the paper?
Answer: Yes.
**Question4: Does this paper make theoretical contributions?**
Answer: Yes.
**Question5: All assumptions and restrictions are stated**
clearly and formally?
Answer: Yes.
**Question6: All novel claims are stated formally (e.g., in**
theorem statements)?
Answer: Yes.
**Question7: Proofs of all novel claims are included?**
Answer: Yes.
**Question8: Proof sketches or intuitions are given for**
complex and/or novel results?
Answer: Yes.
**Question9: Appropriate citations to theoretical tools used**
are given?
Answer: Yes.
**Question10: All theoretical claims are demonstrated empir-**
ically to hold?
Answer: Yes.
**Question11: All experimental code used to eliminate or**
disprove claims is included?
Answer: Yes.
**Question12: Does this paper rely on one or more datasets?**
Answer: Yes.
**Question13: A motivation is given for why the experiments**
are conducted on the selected datasets?
Answer: Yes.
**Question14: All novel datasets introduced in this paper are**
included in a data appendix?
Answer: Yes.
**Question15: All novel datasets introduced in this paper will**
be made publicly available upon publication of the paper
with a license that allows free usage for research purposes?
Answer: Yes.
**Question16: All datasets drawn from the existing literature**
(potentially including authors’ own previously published
work) are accompanied by appropriate citations?
Answer: Yes.
**Question17: All datasets drawn from the existing literature**
(potentially including authors’ own previously published
work) are publicly available?
Answer: Yes.
**Question18: All datasets that are not publicly available are**
described in detail, with explanation why publicly available
alternatives are not scientifically satisficing?
Answer: Yes.
**Question19: Does this paper include computational experi-**
ments?
Answer: Yes.
**Question20: Any code required for pre-processing data is**
included in the appendix?
Answer: Yes.
**Question21: All source code required for conducting and**
analyzing the experiments is included in a code appendix?
Answer: Yes.
**Question22: All source code required for conducting and**
analyzing the experiments will be made publicly available
upon publication of the paper with a license that allows free
usage for research purposes?
Answer: Yes.
**Question23: All source code implementing new methods**
have comments detailing the implementation, with references to the paper where each step comes from?
Answer: Yes.
**Question24: If an algorithm depends on randomness, then**
the method used for setting seeds is described in a way
sufficient to allow replication of results?
Answer: Yes.
**Question25: This paper specifies the computing infrastruc-**
ture used for running experiments (hardware and software),
including GPU/CPU models; amount of memory; operating
system; names and versions of relevant software libraries
and frameworks?
Answer: Yes.
**Question26: This paper formally describes evaluation**
metrics used and explains the motivation for choosing these
metrics?
Answer: Yes.
**Question27: This paper states the number of algorithm runs**
used to compute each reported result?
Answer: Yes.
**Question28: Analysis of experiments goes beyond single-**
dimensional summaries of performance (e.g., average;
median) to include measures of variation, confidence, or
other distributional information?
Answer: Yes.
**Question29: The significance of any improvement or de-**
crease in performance is judged using appropriate statistical
tests (e.g., Wilcoxon signed-rank)?
Answer: Yes.
**Question30: This paper lists all final (hyper-)parameters**
used for each model/algorithm in the paper’s experiments?
Answer: Yes.
**Question31: This paper states the number and range of**
values tried per (hyper-) parameter during development of
the paper, along with the criterion used for selecting the
final parameter setting?
Answer: Yes.
-----
There are 24 investors who want to invest in a The height of a mountain can be measured by
certain place, and their ages are numbered from measuring the relative heights of each of the two Given that AM || BN, ∠A =60°, point P is a
small to large as 1-24 as shown in the figure. adjacent visual observation points in several moving point on the ray AM (not coincident with
Then, 6 investors are selected by systematic observation points, and then using these relative A), BC and BD bisects· ∠ABP and ∠PBN
sampling method and invited to visit the site. heights to calculate the height of the mountain. respectively, and the intersection ray AM is at C
Among them, the number of investors who are and D. (To have the reasoning process, do not need
The table below is a partial record of one
not more than 55 years old is_____. to write the reason for each step)
measurement.
Try to prove: ∠APB=2∠ADB
A. 1 B. 2 C. 3 D. 4 (1) According to C-D and E-D data, is the
comparison observation point C higher or lower
than the relative observation point E?
Point M (x, y) is an internal dynamic point Figure 1 shows a rectangle 2m in length and
within the shadow area, which of the **2n in width, which is divided into four small**
Calculate the surface area and volume of
option is true ? rectangles with scissors along the dotted line
the following cube.
A. C. in the figure, and then assembled into a
B. D. square according to Figure 2.
You believe that the length of the side of the Calculate the surface area and
shaded square in Figure 2 is equal to volume of the following cube.
________.
Figure 1 Figure 2
Figure 13: Math problem samples in MathScape.
-----
| [
"Minxuan, Zhou",
"Hao, Liang",
"Tianpeng, Li",
"Yan, Zhang",
"Zhiyu, Wu",
"Mingan, Lin",
"Linzhuang, Sun",
"Yaqi, Zhou",
"Wentao, Zhang",
"Bin, Cui",
"Xiaoqin, Huang",
"Yicong, Chen",
"Yujing, Qiao",
"Weipeng, Chen",
"Zenan, Zhou"
] | 2024-08-14T00:00:00 | null | false | 1 | 0 | null | https://arxiv.org/abs/2408.07543v3 | https://arxiv.org/abs/2408.07543 | https://www.semanticscholar.org/paper/0501529800d12399d0562e78a053401b51f2d959 |
MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning | The tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs, while tool-free methods chose another track: augmenting math reasoning data. However, a great method to integrate the above two research paths and combine their advantages remains to be explored. In this work, we firstly include new math questions via multi-perspective data augmenting methods and then synthesize code-nested solutions to them. The open LLMs (i.e., Llama-2) are finetuned on the augmented dataset to get the resulting models, MuMath-Code ($\mu$-Math-Code). During the inference phase, our MuMath-Code generates code and interacts with the external python interpreter to get the execution results. Therefore, MuMath-Code leverages the advantages of both the external tool and data augmentation. To fully leverage the advantages of our augmented data, we propose a two-stage training strategy: In Stage-1, we finetune Llama-2 on pure CoT data to get an intermediate model, which then is trained on the code-nested data in Stage-2 to get the resulting MuMath-Code. Our MuMath-Code-7B achieves 83.8 on GSM8K and 52.4 on MATH, while MuMath-Code-70B model achieves new state-of-the-art performance among open methods -- achieving 90.7% on GSM8K and 55.1% on MATH. Extensive experiments validate the combination of tool use and data augmentation, as well as our two-stage training strategy. We release the proposed dataset along with the associated code for public use. | This work firstly includes new math questions via multi-perspective data augmenting methods and then synthesize code-nested solutions to them and proposes a two-stage training strategy, which leverages the advantages of both the external tool and data augmentation. | ## MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning
**Shuo Yin[12][†§], Weihao You[1][†], Zhilong Ji[1][*], Guoqiang Zhong[2], Jinfeng Bai[1]**
1Tomorrow Advancing Life
2College of Computer Science and Technology, Ocean University of China
[email protected], [email protected],
[email protected], [email protected], [email protected]
**Abstract**
The tool-use Large Language Models (LLMs)
that integrate with external Python interpreters
have significantly enhanced mathematical reasoning capabilities for open-source LLMs,
while tool-free methods chose another track:
augmenting math reasoning data. However,
a great method to integrate the above two research paths and combine their advantages
remains to be explored. In this work, we
firstly include new math questions via multiperspective data augmenting methods and then
synthesize code-nested solutions to them. The
open LLMs (i.e., Llama-2) are finetuned on
the augmented dataset to get the resulting models, MuMath-Code (µ-Math-Code). During
the inference phase, our MuMath-Code generates code and interacts with the external python
interpreter to get the execution results. Therefore, MuMath-Code leverages the advantages
of both the external tool and data augmentation. To fully leverage the advantages of our
augmented data, we propose a two-stage training strategy: In Stage-1, we finetune Llama2 on pure CoT data to get an intermediate
model, which then is trained on the code-nested
data in Stage-2 to get the resulting MuMathCode. Our MuMath-Code-7B achieves 83.8 on
GSM8K and 52.4 on MATH, while MuMathCode-70B model achieves new state-of-the-art
performance among open methods—achieving
90.7% on GSM8K and 55.1% on MATH. Extensive experiments validate the combination
of tool use and data augmentation, as well as
our two-stage training strategy. We release the
proposed dataset along with the associated code
for public use.
**GSM8K**
100
MAmmoTH MathCoder ToRA MuMath-Code
90.7
90
83.8 84.6
80
70
**Test Accuracy(%)**
60
50
7b 13b 70b
**MATH**
70
MAmmoTH MathCoder ToRA MuMath-Code
60
55.1
52.4 53.1
50
40
30
**Test Accuracy(%)** 20
10
0
7b 13b 70b
Figure 1: The comparison between our MuMath-Code
and other state-of-the-art tool-use LLMs. MuMathCode exhibits a substantial improvement in performance on both GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021a), relative to the previous approaches.
Brown et al., 2020; Raffel et al., 2023) especially the proprietary ones such as GPT-4 (OpenAI, 2023a) and Claud-3 (Anthropic, 2024) have
demonstrated superiority in a variety of tasks, e.g.,
text classification (Wang et al., 2018; Devlin et al.,
2019; Min et al., 2022; Jiang et al., 2023b), automated coding (Chen et al., 2021; Luo et al., 2023b),
instructions following (Longpre et al., 2023), and
math problem solving (Chowdhery et al., 2022;
Lewkowycz et al., 2022; Anil et al., 2023; Fu et al.,
2023a). Among these tasks, the capability to han
**1** **Introduction**
In Natural Language Processing (NLP), Large
Language Models (LLMs) (Radford et al., 2019;
†Equal contribution.
§Work done while the author was interning at TAL.
*Corresponding author.
-----
dle math problems stands as a typical and critical criterion for the evaluation of different LLMs.
However, a significant performance disparity is observed between open-source LLMs, for instance,
LLaMA (Touvron et al., 2023a,b), and their proprietary counterparts, when it comes to mathematical
reasoning ability.
In recent years, many scholarly publications
have been directed towards improving the mathematical proficiency of LLMs, which can be categorized into two distinct research trajectories: those
that purely rely on natural language reasoning and
those that incorporate external tools. The former
methods are tool-free, mainly depends on data augmentation to enhance the models’ mathematical
reasoning capability, while the second trajectory
(namely tool-use LLMs) are often coupled with
external Python interpreters. From the perspective
of knowledge distillation (Huang et al., 2022; Li
et al., 2022; Magister et al., 2023; Ho et al., 2023;
Fu et al., 2023b; Shridhar et al., 2023), both mainstream approaches transfer math reasoning abilities
from the powerful teacher models (for instance,
GPT-4) to the inferior open foundation models.
The tool-free methods synthesize a large number
of new math problems and corresponding solutions,
taking the original training math QA pairs as the initial data seeds. Scaling law theoretically provides
the basis for the ongoing improvement of LLMs’
performance by constantly incorporating new training data. Representative approaches are RFT (Yuan
et al., 2023), MetaMath (Yu et al., 2023), WizardMath (Luo et al., 2023a), MuggleMath (Li et al.,
2023), MuMath (You et al., 2024), etc. As for
the second trajectory, code executors substantially
supplant LLMs in particularly challenging computational and logical tasks, thereby alleviating
the problem-solving burden on them. This tooluse category is exemplified by PAL (Gao et al.,
2023), PoT (Chen et al., 2023), MAmmoTH (Yue
et al., 2023), ToRA (Gou et al., 2023) and MathCoder (Wang et al., 2023).
Although the aforementioned research paths
have been individually successful, to date, few
methods have been developed that amalgamate
their respective advantages. In this paper, we propose a novel method that integrates tool usage with
data augmentation to synthesize a large amount
of multi-perspective mathematical questions and
solutions (we employ the augmenting methods introduced in a previous work MuMath (You et al.,
2024)). Specifically, we utilize proprietary LLMs
(like GPT-4) to generate Python code while synthesizing new solutions to math problems, and then
fine-tune the open-source models (e.g., LLaMA)
on the augmented dataset. The resulting model,
**MuMath-Code, is thus equipped with the ability**
to write code for math problem solving. During
the inference phase, our MuMath-Code can generates both CoT (Wei et al., 2022) reasoning texts
and Python code blocks. These code blocks are
then extracted and executed by an external Python
interpreter, and the execution results are returned
to MuMath-Code for subsequent rounds of CoT
reasoning or code generation until the final result
is obtained or the maximum number of execution
rounds is reached.
The multi-perspective mathematical question
set comprises questions augmented via rephrasing (Yu et al., 2023), alteration (Li et al., 2023;
You et al., 2024), FOBAR (Jiang et al., 2023a),
BF-Trans (You et al., 2024), besides those from
the original training sets. Regarding the solutions
nested with Python code, we leverage a general
pattern like the ones used in ToRA (Gou et al.,
2023) and MathCoder (Wang et al., 2023): CoTPoT interleaving. However, we propose prefix CoT,
code debugging and pseudo-answer guidance filtering to improve the consistency and quality of our
augmented solutions. The prefix CoT is a thoughtful analysis in pure natural language before code
generation, making the LLMs consider this analysis while generating all the subsequent content,
which thus are helpful for the models to learn the
whole solution. Besides, we prompt GPT-4 to debug and correct the inexecutable code when requesting the solutions, and we keep the faulty code
since this process of verification and correction can
help boost the models’ coding proficiency. Furthermore, for those synthesized questions via alteration,
which lack ground truth answers as filtering guidance, we choose the majority-voting answers as
the pseudo-answers. This process can increase the
correctness of the generated solutions and thus improve the data quality generally. We name the proposed dataset as MuMath-Code-Data and denote
it as Dµ-code.
Moreover, previous tool-use LLMs for math are
derived by directly finetuning on code-nested data,
which thus fail to fully harness the intrinsic natural language reasoning capability of the LLMs
themselves. Different from the other tool-use methods, we design a two-stage training strategy to
better combine the advantages of data augmenta
-----
tion and external code execution. The first stage is
to enhance the models’ pure language mathematical reasoning, where the largest (751K) dataset
proposed in MuMath (here called MuMath-Data
and denoted as _µ) is utilized to finetune LLaMA,_
_D_
and get an intermediate model, MuMath. In the
second stage, we continue finetuning MuMath on
MuMath-Code-Data to equip the model with the
ability to write code for solving math problems.
The resulting model, MuMath-Code, is thus can
be prompted to leverage the Python interpreter to
execute its generated code for securing the desirable outputs at inference time.
Our contributions are summarized as follows:
- We construct a multi-perspective augmentation dataset with code-nested solutions for
math problem solving, called MuMath-CodeData.
- We design a two-stage training strategy to
equip the open LLMs with pure language reasoning and math related code generation capabilities, respectively.
- The obtained model, MuMath-Code, achieves
new state-of-the-art performance among open
LLMs across the in-domain math reasoning
datasets as well as the out-of-domain ones.
MuMath-Code-7B have 83.8 on GSM8K and
52.4 on MATH, while MuMath-Code-70B has
achieved 90.7% on GSM8K and 55.1% on
MATH.
**2** **Related Work**
**2.1** **Tool-Free LLMs for Math**
Rejection Sampling-based Fine-Tuning (RFT, Yuan
et al., 2023) only augments the solutions via rejection sampling to collect a variety of different
reasoning paths. Since RFT does not introduce
new math questions, the diversity of the augmented
dataset is quite low, which limits the performance
improvement of the finetuned models. With the aim
of incorporating a broader spectrum of questions,
MetaMath (Yu et al., 2023) employs rephrasing,
Self-Verification (SV, Weng et al., 2023) and FOBAR (Jiang et al., 2023a) to generate new questions.
Ideally speaking, like the original questions, there
are also ground truth answers for filtering solutions
to these augmented questions. To bring in more diverse data, WizardMath (Xu et al., 2023; Luo et al.,
2023a) and MuggleMath (Li et al., 2023) choose
to create totally new questions via evolution or directional modification (changing numbers, adding
conditions, increasing complexity, etc.) based on
the seed questions. These altered questions have
no ground truth answers, thus lacking a criterion to
filter their corresponding synthesized solutions.
Furthermore, MuMath (You et al., 2024) leverages some of the aforementioned methods, and
additionally proposes BF-Trans and expression replacement (etc.) to perform comprehensive augmentation, thus constructing a multi-perspective
math question set with much greater diversity. For
improving data quality, majority sampling serves
as the filtering rule for the synthesized solutions
to those new questions without deterministically
known answers. Instead of solution filtering, a contemporary work, Xwin-Math (Li et al., 2024), employs verification with solution requesting during
question synthesis, thereby improving the solvability of the questions and the correctness of the answers. Since there is no restriction on the direction
of question modification, Xwin-Math theoretically
offers a wider variety of diverse synthesized data.
Balancing the efficacy and the ease of replication,
in this paper the proposed MuMath-Code opts to
employ the question augmentation from MuMath,
although it is orthogonal to any other augmentation
methods.
Nevertheless, as probabilistic models, LLMs inherently have limitations in logical reasoning and
numerical computation. Thus, to improve the accuracy of mathematical problem-solving while relying solely on the capabilities of LLMs necessitates
the utilization of a substantially larger dataset compared to tool-use methods.
**2.2** **Tool-Use LLMs for Math**
Another research trajectory highlights the synergy
between LLMs and external tools. Pioneering efforts along this include the Program-aided Language model (PAL, Gao et al., 2023) and Program of Thought (PoT, Chen et al., 2023). Moreover, MAmmoTH (Yue et al., 2023) integrates both
CoT and PoT in a coarse-grained fashion (each
sample corresponds to only one of these two possible solution types), enabling flexible inference
where the finetuned models may adopt different
methods for different questions. Different from
MAmmoTH, ToRA (Gou et al., 2023) interleaves
python code blocks and natural language reasoning parts over multiple turns for a same solution,
which offers a more flexible combination of CoT
and PoT. However, neither MAmmoTH nor ToRA
employs query augmentation, thereby narrowing
-----
the range of math questions, which in effect, limits
the problem-solving capabilities that can be acquired. Wang et al. propose a contemporaneous
work with ToRA, MathCoder (Wang et al., 2023),
where each solution is also organized in an interleaved manner. Besides, they introduce interpolation problems to mitigate the disparity in difficulty
level between GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021b). Hence, like our
MuMath-Code, MathCoder is also an amalgamation of tool usage and math question augmentation,
although the new questions it introduces are comparatively narrow in scope and limited in diversity.
Similar to ToRA and MathCoder, we also construct such solutions that intertwine Python code
with pure language reasoning text to adaptably
combine LLMs with external code executing tools.
However, we propose prefix CoT, code debugging, and pseudo-answer guidance filtering to further enrich the solutions and improve their correctness. Additionally, different from MathCoder,
the question augmentation we utilize are multiperspective, thus offering greater diversity and exposing the model to a broader scope of novel questions, thereby significantly enhancing the model’s
generalization capabilities.
**3** **Preliminaries**
We employ the augmented questions from MuMath (You et al., 2024) and synthesize code-nested
solutions to them. To help the models better learn
such solutions with multi-turn code generation,
code execution and pure natural language reasoning, we propose prefix CoT, code debugging, and
pseudo-answer guidance filtering to augment the
quality of the synthetic data, as well as a two-stage
training strategy. Figure 2 delineates the overall
pipeline.
**3.1** **MuMath Augmented Questions**
The original questions from the training
sets of GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021a) are taken as
the seed question set Qoriginal. The question
augmenting methods employed in MuMath are
conducted on this seed set, which are briefly
concluded as follows:
**(1) Rephrasing** Rewrite a text while keeping the
original meaning unchanged. Based on the fact that
a rephrased question holds the same meaning as
the original one, the final answer of it should also
be the same. We denote the rephrased question set
as Qrephrase.
**(2) Question Alteration** There are five manners
to alter the original questions, like changing numbers and adding more conditions, concluded in
MuggleMath (Li et al., 2023). The resultant question set created via alteration is referred to as
_Qalter = Qalter1 ∪_ _Qalter2 ∪_ _Qalter3 ∪_ _Qalter4 ∪_
_Qalter5. Besides, Expression Replacement, pro-_
posed in MuMath, firstly get the expressions of the
solution to an original question, then change the
calculation operators within them. Based on the
changed expressions, a new question is asked to
generate. Qreplace represents the question set produced by this augmentation technique. Note that
_Qalter and Qreplace correspond no definitely cor-_
rect answers due to modifications in the questions’
intrinsic meanings.
**(3) FOBAR** Following Jiang et al. (2023a), we
mask a certain condition in an initial question by
substituting it with “X", and meanwhile give the
answer to the original question as a new condition,
thereby creating a reverse question that seeks to
determine the value of the unknown X. Qfobar is
utilized to mark the FOBAR question set.
**(4) BF-Trans** Backward-Forward Transformation (BF-Trans), proposed in MuMath, aims to
construct such backward questions that can be answered through direct reasoning, bypassing the necessity of solving equations to find the unknown
variables (thus resemble the data sampled from
the original distribution). For a certain questionanswer pair, BF-Trans firstly utilize FOBAR to
transform the original question into a backward
one; secondly, we rephrase the FOBAR question
into a new form where the masked value is requested directly instead of employing an unknown
variable X, resulting in a “secondary forward” question which we called BF-Trans question. The set
of these BF-Trans questions is marked as Qbf .
To sum up, all the 10 aforementioned subsets
(5 in Qalter) constitute the resulting question set
= Qoriginal _Qrephrase_ _Qalter_ _Qreplace_
_Q_ _∪_ _∪_ _∪_ _∪_
_Qfobar_ _Qbf_ . Based on, we generate 2 datasets
_∪_ _Q_
called MuMath-Data and MuMath-Code-Data,
emphasizing pure natural language mathematical
reasoning and tool interaction via code generation,
respectively.
-----
Stage 1 Stage 2 **Question**
What is the smallest whole number…
**Solution**
Natural Language **Psuedo-** Code Nested and **[Prefix CoT]**
Reasoning Data **Answer** Tool Interaction Firstly we need to find …
Data
**[Python Code]**
import sympy
def solve( ):
Pretrained
MuMath MuMath MuMath-Code x = 1 Prompt to debug
Model …
print(…)
**Error**
**[Output]**
Final Answer **Success**
Figure 2: Illustration of our proposed method. The foundation model is first trained through an initial stage, resulting
in an intermediary model that possesses more powerful math reasoning capability. This intermediary model is then
further trained on the proposed dataset to learn code generation and tool interaction, leading to the final model,
MuMath-Code.
**3.2** **MuMath-Data**
MuMath-Data (denoted as _µ) is just the largest_
_D_
dataset from MuMath, which contains about 750K
samples with pure CoT reasoning solutions to questions in Q.
**Majority Sampling** As is introduced in the paper
of MuMath, for Qalter and Qreplace whose each
question has no reference answer, majority sampling is utilized to filter all the randomly generated solutions and only those solutions with the
majority answers are kept. In other words, each
majority answer serves as a pseudo-answer to the
corresponding question.
**4** **Methodology**
**4.1** **MuMath-Code-Data**
To facilitate the interaction with the python interpreter, we synthesize the code-nested solutions for
the models to learn, each consisting of multi-turn
code generation, code execution and pure natural
language reasoning.
Specifically, for each question from Q, we
prompt proprietary LLMs to request solutions each
with at least one block of code, which is then extracted and passed to the external interpreter for
execution. Every execution result is appended to
the preceding content, right after the corresponding
code block. If the code execution fails, we append
a prompt to actively debug, using all the previous
content as a whole new prompt to request the corrected code, which we then extract and execute
again. By iterating this process multiple times, we
obtain a reasoning path comprising code, execution
outcomes and natural language analysis. This reasoning path is similar to that of MathCoder (Wang
et al., 2023) and ToRA (Gou et al., 2023), but the
differences lie in the use of our proposed prefix
CoT, code debugging, and pseudo-answer guidance
filtering, which will be elaborated on in this section.
We marked MuMath-Data-Code as Dµ-code.
**Prefix CoT** We have observed that before generating code, a thorough pure natural language
analysis is helpful for the models’ performance.
Therefore, we deliberately add a thoughtful CoT
reasoning before code writing. The request prompt
used is “Analyze the question; list some knowledge
_points related to the question and beneficial for_
_problem solving.”._
**Code Debugging** Several research studies have
shown that the use of error correction and verification data can improve the mathematical reasoning capabilities of LLMs. Therefore, we introduce an error correction process for our augmented
dataset. Specifically, while constructing a solution,
if the generated code fails to execute, we append a
prompt “The code above has encountered a prob_lem. Now point out its mistakes and then correct_
_them.” for GPT-4 to debug the code and write new_
code until the executable code is obtained, or the
maximum number of requests is reached. The failing code and error information are kept to equip
the finetuned models with debugging ability, and
-----
thus enhance their coding proficiency for solving
math problems.
**Pseudo-Answer** **Guidance** **Filtering** In
MuMath-Data, we employ majority sampling
to filter solutions. This provides us with
pseudo-answers for the augmented questions
corresponding no reference answers, which can
also be employed for MuMath-Code-Data to select
solutions. This approach improve the correctness
of the synthesized solutions, thereby leading
to an enhancement in the overall quality of the
augmented data.
To sum up, we mark the i-th CoT (pure natural language reasoning) part as ci; the i-th python
code part is marked as pi, which always begins
with ```python and ends with ```; the i-th code
execution output is denoted as oi, beginning with
```output and ending with ```. To formalize, one
resulting solution s is defined as follows:
_[n][−][1]_
_s =_ _cipioi_ _cn_ (1)
_i=1_
M
= c1p1o1c2p2o2...cn−1pn−1on−1cn,
where stands for the concatenation of all the
turns, and n is the number of CoT parts. See Appendix A[L] for an example.
**4.2** **Two-Stage Training**
**Stage-1** The first stage training is on MuMathData, where the models concentrate on learning
the capability of pure CoT math reasoning. The
learning target is as follows:
_L2 =_
_−_ Eq,s∼Dµ-code
_i−1_
_cjpjoj; θ_
_j=1_
M []
(3)
log P _cipi_ _q,_
_|_
_i=1_
X
where pn = ∅. The training process at Stage-2 is
consistent with the inference, so we do not need
to consider the issue of catastrophic forgetting (regarding the natural language reasoning in Stage-1).
At inference time, after being given a mathematical
problem, the finetuned model needs to generate
code for problem solving, and then an external interpreter executes the code and returns the result
for the model to continue generating. Therefore,
Stage-2 training simulates the above inference process by masking out the losses of the execution
outputs.
**5** **Experiments**
**5.1** **Experimental Setup**
**Datasets** Our seed datasets for synthesis are
the training sets of two popular math reasoning
benchmarks: GSM8K (Cobbe et al., 2021) and
MATH (Hendrycks et al., 2021a). GSM8K contains elementary school math problems, comprising
7,473 training instances and 1,319 test instances;
while MATH encompasses math competition problems at the high school level with 7,500 training
samples and 5,000 for test.
We take the MuMath (You et al., 2024) dataset
(750K) as our Dµ for Stage-1 training, and the
MuMath augmented question set Q are utilized to
construct Dµ-code for Stage-2; in Q, we request
15 solutions for each question that originates from
GSM8K and 30 for MATH-related ones, and then
perform filtering to get 30K samples for each question subset, making 600K in total.
**Implementation Details** Our study utilizes
LLaMA-2 (7B, 13B and 70B) (Touvron et al.,
2023b) and CodeLlama (7B, 13B, 34B, and
70B) (Rozière et al., 2023) as the foundation models for full-parameter finetuning, corresponding
to MuMath-Code-L and MuMath-Code-CL as the
resulting models. We employ AdamW as the optimizer and a cosine learning rate scheduler with
a 0.03 warmup ratio. Across all the models and
both stages, we train 3 epochs with a 128 global
batch size. All the models except for LLaMA-70B
_l_
1 = Eq,s _µ_ log P _xt_ _q, x<t; θ_ _, (2)_
_L_ _−_ _∼D_ _|_
_t=1_
X []
where the solution s = (x1, x2, ..., xl) contains l
tokens, and θ is the parameter of MuMath-Code.
This training stage endow the models with a
fairly strong mathematical reasoning capability,
which can be seen as an preliminary task for the
second stage learning.
**Stage-2** The second stage training is on MuMathCode-Data, where the models concentrate on PoTCoT interleaved data to learn how to interact with
an external tool (i.e., the Python interpreter). We
mask the loss of the outputs from the code execution, which should not be learned by the models.
The learning target is:
-----
and CodeLlama-70B are trained using the Deepspeed framework, while those two 70B models are
trained using Megatron for the sake of speed. The
hardware we use are NVIDIA H800 GPUs.
**5.2** **Comparison Results**
As shown in Table 1, the comparison experiment of
our models with the current state-of-the-art demonstrates that our approach consistently achieves superior performance across all scales of open-source
models on all the datasets. Notably, our MuMathCode-L 7B model has attained a test accuracy of
83.8 on the GSM8K, and MuMath-Code-CL 7B
has reached a score of 52.4 on MATH. These outcomes surpass many 70B open-source baselines
and even some proprietary LLMs. Additionally,
our MuMath-Code-CL 34B and 70B achieve 55.0+
on MATH, two impressive results considering that
they are accomplished by leveraging data augmentation techniques based on the original training set
without the incorporation of extensive additional
mathematical corpora for pre-training.
There are some noteworthy findings from the experimental statistics presented in the table, such
as the performance of MuMath-Code-CL 13B
on MATH, registering at 53.1, which is only
marginally higher than that of MuMath-Code-CL
7B, which stands at 52.4. Moreover, the MuMathCode-CL 34B’s performance on MATH, scoring at
55.0, is very close to that of the MuMath-Code-CL
70B, which records a score of 55.1. We speculate that this may be attributed to the phenomenon
where, beyond a certain threshold of data volume,
the advantages conferred by increased model size
may be diminished or even offset by the benefits
derived from the expanded dataset. Additionally,
variations in the training frameworks may also contribute to the observed discrepancy between the
performances of MuMath-Code-CL 34B and 70B.
**5.3** **Effectiveness of the Two-Stage Training**
**Strategy**
MuMath-Code is derived from a two-stage training process that enhances the model’s pure natural
language reasoning capabilities and the ability to
generate code and interact with external tools. In
this section, we validate the efficacy of this bifurcated training strategy. Unless otherwise specified,
all ablation experiments presented in this paper
are conducted on 7B models, for the sake of time
efficiency. We have designed a comparative evaluation of model performances for two-stage and
one-stage training strategies. The two-stage training referred to here is as described in Section 4.2,
which involves continuing training from the checkpoints of the first stage (the MuMath models). The
one-stage training, directly applies the second stage
of training on the base models. On both settings,
we vary the data volumes of Dµ-code. Table 2 illustrates the performance comparison of models
derived from both strategies across different data
volumes, revealing that training solely on Dµ-code
is worse than the two-stage training. Furthermore,
by merging the training data from both stages into
a single dataset for one-stage training, we observe
that the outcomes are still not as favorable as those
obtained from two separate training stages.
To further validate the effectiveness of our twostage training strategy, we select MetaMath (Yu
et al., 2023) and Xwin-Math (Team, 2023) 7B
models as the initial checkpoints for Stage-2 training, emulating the scenario where relevant datasets
were employed during the first stage (Given the
the unavailability of the most recent models and
dataset proposed in (Li et al., 2024), we opt to
utilize Xwin-Math-7B-V1.0 detailed in the corresponding GitHub repository). Table 2 illustrates
that models fine-tuned from MetaMath and XwinMath checkpoints on Dµ-code (two-stage) outperform the one directly trained from Llama (singlestage), verifying the efficacy of a two-stage training
strategy as well as the compatibility of our Dµ-code
with different first-stage CoT datasets.
**5.4** **Ablation Studies**
To verify our proposed prefix CoT and code debugging, we respectively modify the solutions in
_Dµ-code via two distinct approaches: the first ap-_
proach involves the removal of the prefix CoT,
thereby eliminating the detailed preliminary analysis and directly begining with code writing; the
second approach consists of retaining only the final
and successfully executed code and omitting all
the other inexecutable code before as well as the
corresponding debugging process. The results of
this ablation study are presented in Table 3, which
demonstrates that the exclusion of either the prefix
CoT or code debugging leads to a decline in the
models’ test accuracy. This emphatically underscores the significance of a thorough analysis prior
to code writing and the code mistake correction
process for the models’ learning.
Moreover, we conduct another ablation experiment on pseudo-answer guidance filtering. In Sec
-----
Model GSM8K MATH GSM-Hard SVAMP TabMWP ASDiv MAWPS
_colsed-source LLMs_
Claud-3 Opus (Anthropic, 2024) 95.0 60.1 - - - -
GPT-4 (OpenAI, 2023b) 92.0 42.5 64.7 93.1 67.1 91.3 97.6
GPT-4 (PAL) 94.2 51.8 77.6 94.8 95.9 92.6 97.7
GPT-3.5 (OpenAI, 2023a) 80.8 35.5 55.9 83.0 69.1 87.3 94.6
GPT-3.5 (PAL) 78.6 38.7 67.6 77.8 79.9 81.0 89.4
_tool-free open LLMs_
_7B_
LLaMA-2 (Touvron et al., 2023b) 13.3 4.1 7.8 38.0 31.1 50.7 60.9
LLaMA-2 SFT (Touvron et al., 2023b) 41.3 7.2 16.1 31.9 27.8 47.4 60.0
WizardMath (Luo et al., 2023a) 54.9 10.7 20.6 57.3 38.1 59.1 73.7
MetaMath (Yu et al., 2023) 66.5 19.8 - - - - -
MuggleMath (Li et al., 2023) 68.4 - - - - - -
MuMath (You et al., 2024) 70.9 22.0 - 76.8 - 93.6 87.3
_13B_
LLaMA-2 (Touvron et al., 2023b) 24.3 6.3 13.6 43.1 39.5 56.3 70.4
LLaMA-2 SFT (Touvron et al., 2023b) 51.1 9.2 22.3 46.3 35.8 58.6 75.0
WizardMath (Luo et al., 2023a) 63.9 14.0 28.4 64.3 46.7 65.8 79.7
MetaMath (Yu et al., 2023) 72.3 22.4 - - - - -
MuggleMath (Li et al., 2023) 74 - - - - - -
MuMath (You et al., 2024) 76.4 25.3 - - - - -
_70B_
LLaMA-2 (Touvron et al., 2023b) 57.8 14.4 36.0 73.6 57.5 76.0 92.4
LLaMA-2 SFT (Touvron et al., 2023b) 69.3 14.9 39.0 64.0 53.0 71.3 84.8
WizardMath (Luo et al., 2023a) 81.6 22.7 50.3 80.0 49.8 76.2 86.2
MetaMath(Yu et al., 2023) 82.3 26.6 - - - - -
MuggleMath (Li et al., 2023) 82.3 - - - - - -
MuMath (You et al., 2024) 84.5 32.2 - 87.6 - 96.6 92.0
_tool-use open LLMs_
_7B_
MAmmoTH (Yue et al., 2023) 53.6 31.5 - 67.7 - - -
MAmmoTH-Coder 59.4 33.4 - 71.4 - - -
CodeLLama (PAL) (Rozière et al., 2023) 34.0 16.6 33.6 59.0 47.3 61.4 79.6
MathCoder-L (Wang et al., 2023) 64.2 23.3 - 71.5 - - -
MathCoder-CL (Wang et al., 2023) 67.8 30.2 - 70.7 - - -
ToRA (Gou et al., 2023) 68.8 40.1 54.6 68.2 42.4 73.9 88.8
ToRA-Code (Gou et al., 2023) 72.6 44.6 56.0 70.4 51.6 78.7 91.3
**MuMath-Code-L** **83.8** 48.8 70.5 87.6 65.6 86.2 94.7
**MuMath-Code-CL** 82.6 **52.4** **70.6** **88.1** **66.9** **87.4** **95.3**
_13B_
MAmmoTH (Yue et al., 2023) 62.0 34.2 - 72.4 - - -
MAmmoTH-Coder(Yue et al., 2023) 64.7 36.3 - 73.7 - - -
CodeLlama (PAL) (Rozière et al., 2023) 39.9 19.9 39.0 62.4 59.5 65.3 86.0
MathCoder-L (Wang et al., 2023) 72.6 29.9 - 76.9 - - -
MathCoder-CL (Wang et al., 2023) 74.1 35.9 - 78.0 - - -
ToRA (Gou et al., 2023) 72.7 43.0 57.3 72.9 47.2 77.2 91.3
ToRA-Code (Gou et al., 2023) 75.8 48.1 60.5 75.7 65.4 81.4 92.5
**MuMath-Code-L** 84.3 49.9 70.6 **87.9** 64.9 **86.4** 94.9
**MuMath-Code-CL** **84.6** **53.1** **70.8** 86.8 **67.2** 85.2 **95**
_34B_
CodeLLaMa (PAL) (Rozière et al., 2023) 53.3 23.9 49.4 71.0 63.1 72.4 91.5
MAmmoTH-Coder (Yue et al., 2023) 72.7 43.6 - 84.3 - - -
MathCoder-CL (Wang et al., 2023) 81.7 45.2 - 82.5 - - -
ToRA (Gou et al., 2023) 80.7 50.8 63.7 80.5 70.5 84.2 **93.3**
**MuMath-Code-CL** **87.6** **55.0** **68.8** **91.4** **74.9** **87.9** 92.9
_70B_
LLaMA-2 (PAL) 55.2 18.3 50.0 74.6 59.5 71.9 92.8
MAmmoTH (Yue et al., 2023) 76.9 41.8 - 82.4 - - -
MathCoder-L (Wang et al., 2023) 83.9 45.1 - 84.9 - - -
ToRA (Gou et al., 2023) 84.3 49.7 67.2 82.7 74.0 86.8 93.8
**MuMath-Code-L** **90.7** 52.8 68.6 **93** 74 **88.4** **95.4**
**MuMath-Code-CL** 89.5 **55.1** **70.1** 92.9 **77.4** 87.9 94.7
Table 1: Comparison of the state-of-the-art methods on various datasets. For the tool-use open LLMs, the best
results are bolded and the second best underlined among the same scale models tested on the same datasets.
-----
(a) Test on GSM8K (b) Test on MATH
Figure 3: Scaling all the subsets of MuMath-Code-Data. The models undergo a single stage (only Stage-2) of
training.
LLaMA CodeLlama
Data Size Pseudo-Answer
GSM8K MATH GSM8K MATH
w 65.4 33.4 67.3 38.5
30K
w/o 65.9 32.4 67.6 36.9
w 71.4 37.6 73.4 42.7
60K
w/o 71.2 36.6 72.3 40.4
w 75.1 39.3 75.2 44.5
90K
w/o 74.8 37.9 74.2 41.9
w 76.1 40.7 76.9 45.7
120K
w/o 74.6 40.3 75.9 43.6
w 77.8 42.7 76.8 46
150K
w/o 75.8 41.5 76.4 45
w 77.3 43.5 78.5 47.3
180K
w/o 76.7 42.7 77.7 46.5
Table 4: Ablation study for pseudo-answer guidance
filtering.
data volumes ranging from 30K to 180K.
**5.5** **Scaling Study**
The scaling experiments for various subsets of
the MuMath-Code-Data are depicted in Figure 3.
These curves represent the performance changes
of models trained on different data subsets with
respect to the number of samples. The base model
is LLaMA 7B and it is directly trained on the subsets of Dµ-code (single-stage training). It is evident
that with the increase in data volume, all subsets
continuously contribute to the enhancement of the
models’ performance, and the curves still do not
show saturation. This indicates that employing our
methodology allows for the continued addition of
data to further improve the LLMs’ mathematical
reasoning capabilities. For the two-stage scenario
where the initial model is an intermediate checkpoint from Stage-1, please refer to Appendix B.
LLaMA CodeLlama
Inference Training Strategy
GSM8K MATH GSM8K MATH
_meta_ 66.5 19.8 - -
Tool free _D_
_xwin_ 66.6 17.4 - -
_D_
_Dµ-code_ 81.2 46.2 81 49.8
_Dµ + Dµ-code_ 82.7 47.1 81.3 49.1
Tool use _Dmeta →Dµ-code_ 82.3 47.4 - -
_Dxwin →Dµ-code_ 82.0 47.2 - -
_Dµ →Dµ-code_ 83.8 48.8 82.6 52.4
Table 2: A two-stage training strategy improves the
models’ performance, as opposed to a single-stage training.
LLaMA CodeLlama
Sythesized Solutions
GSM8K MATH GSM8K MATH
w all 83.8 48.8 82.6 52.4
w/o prefix CoT 81.3 47.5 81.8 49.4
w/o code debugging 82 47.1 82.1 52.1
w/o either 81.0 46.8 81.3 49.0
Table 3: Ablation study for prefix CoT and code debugging.
tion 4.1, we note that pseudo-answers are suitable
for synthetic questions that lack a definitive correct answer, namely those in Qalter and Qreplace.
In MuMath, majority voting is utilized to assign
pseudo-answers to these questions. These pseudoanswers are then also employed to filter the data for
_Dµ-code in the second training stage. As illustrated_
in Table 4, fine-tuning the model with data filtered
through this pseudo-answer technique proves to
be more beneficial than solutions obtained through
directly random sampling. This trend holds across
-----
**6** **Conclusion**
In this paper, we propose a multi-perspective
and code integrated math reasoning dataset called
MuMath-Code-Data, where each solution contains
multi-turn code generation, code execution and
pure natural language analysis (CoT). Through
a two-stage training strategy, our MuMath-Code
models outperforms the state-of-the-art open methods and even some powerful proprietary ones
across different scales on the in-domain reasoning datasets (i.e., GSM8K and MATH) as well as
those out-of-domain ones. Additionally, ablation
studies demonstrates the effectiveness of our three
novel methods for the data synthesis: prefix CoT,
code debugging and pseudo-answer guidance filtering. Our work represents a new attempt at integrating mathematical question augmentation (tool-free)
with code generation and execution (tool-use) to
enhance the mathematical reasoning capabilities
of LLMs, and we hope it can inspire subsequent
research endeavors.
**7** **Acknowledgments**
This work was supported by National Key
R&D Program of China under Grant No.
2020AAA0104500, HY Project under Grant No.
LZY2022033004, the Natural Science Foundation of Shandong Province under Grants No.
ZR2020MF131 and No. ZR2021ZD19, the Science and Technology Program of Qingdao under
Grant No. 21-1-4-ny-19-nsh, and Project of Associative Training of Ocean University of China
under Grant No. 202265007.
**References**
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. 2023. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403._
Anthropic. 2024. The claude 3 model family: Opus, sonnet, haiku. [https://www.anthropic.com/news/](https://www.anthropic.com/news/claude-3-family)
[claude-3-family.](https://www.anthropic.com/news/claude-3-family)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, T. J. Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens
Winter, Christopher Hesse, Mark Chen, Eric Sigler,
Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020.
[Language models are few-shot learners.](https://api.semanticscholar.org/CorpusID:218971783) _ArXiv,_
abs/2005.14165.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
[Sutskever, and Wojciech Zaremba. 2021. Evaluating](http://arxiv.org/abs/2107.03374)
[large language models trained on code.](http://arxiv.org/abs/2107.03374)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023. [Program of thoughts](http://arxiv.org/abs/2211.12588)
[prompting: Disentangling computation from reason-](http://arxiv.org/abs/2211.12588)
[ing for numerical reasoning tasks.](http://arxiv.org/abs/2211.12588)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, Parker Schuh, Kensen Shi,
Sasha Tsvyashchenko, Joshua Maynez, Abhishek
Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben
Hutchinson, Reiner Pope, James Bradbury, Jacob
Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat,
Sunipa Dev, Henryk Michalewski, Xavier Garcia,
Vedant Misra, Kevin Robinson, Liam Fedus, Denny
Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim,
Barret Zoph, Alexander Spiridonov, Ryan Sepassi,
-----
David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira,
Rewon Child, Oleksandr Polozov, Katherine Lee,
Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark
Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy
Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov,
[and Noah Fiedel. 2022. Palm: Scaling language mod-](http://arxiv.org/abs/2204.02311)
[eling with pathways.](http://arxiv.org/abs/2204.02311)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. BERT: Pre-training of](https://doi.org/10.18653/v1/N19-1423)
[deep bidirectional transformers for language under-](https://doi.org/10.18653/v1/N19-1423)
[standing. In Proceedings of the 2019 Conference of](https://doi.org/10.18653/v1/N19-1423)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4171–4186, Minneapolis, Minnesota. Association for
Computational Linguistics.
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng,
[and Tushar Khot. 2023a. Chain-of-thought hub: A](http://arxiv.org/abs/2305.17306)
[continuous effort to measure large language models’](http://arxiv.org/abs/2305.17306)
[reasoning performance.](http://arxiv.org/abs/2305.17306)
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
Tushar Khot. 2023b. Specializing smaller language
models towards multi-step reasoning. arXiv preprint
_arXiv:2301.12726._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2023. Pal: Program-aided language](http://arxiv.org/abs/2211.10435)
[models.](http://arxiv.org/abs/2211.10435)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
[Chen. 2023. Tora: A tool-integrated reasoning agent](http://arxiv.org/abs/2309.17452)
[for mathematical problem solving.](http://arxiv.org/abs/2309.17452)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021a. Measuring mathematical
problem solving with the MATH dataset.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021b. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874._
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers.](http://arxiv.org/abs/2212.10071)
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu,
Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022.
[Large language models can self-improve.](http://arxiv.org/abs/2210.11610)
Weisen Jiang, Han Shi, Longhui Yu, Zhengying Liu,
Yu Zhang, Zhenguo Li, and James T. Kwok. 2023a.
[Forward-backward reasoning in large language mod-](http://arxiv.org/abs/2308.07758)
[els for mathematical verification.](http://arxiv.org/abs/2308.07758)
[Weisen Jiang, Yu Zhang, and James Kwok. 2023b. Ef-](https://proceedings.mlr.press/v202/jiang23k.html)
[fective structured prompting by meta-learning and](https://proceedings.mlr.press/v202/jiang23k.html)
[representative verbalizer. In Proceedings of the 40th](https://proceedings.mlr.press/v202/jiang23k.html)
_International Conference on Machine Learning, vol-_
ume 202 of Proceedings of Machine Learning Re_search, pages 15186–15199. PMLR._
Aitor Lewkowycz, Anders Andreassen, David Dohan,
Ethan Dyer, Henryk Michalewski, Vinay Ramasesh,
Ambrose Slone, Cem Anil, Imanol Schlag, Theo
Gutman-Solo, et al. 2022. Solving quantitative reasoning problems with language models. Advances
_in Neural Information Processing Systems, 35:3843–_
3857.
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and Houwen
Peng. 2024. Common 7b language models already
possess strong math capabilities. _arXiv preprint_
_arXiv:2403.04706._
Chengpeng Li, Zheng Yuan, Hongyi Yuan, Guanting
Dong, Keming Lu, Jiancan Wu, Chuanqi Tan, Xiang
[Wang, and Chang Zhou. 2023. Query and response](http://arxiv.org/abs/2310.05506)
[augmentation cannot help out-of-domain math rea-](http://arxiv.org/abs/2310.05506)
[soning generalization.](http://arxiv.org/abs/2310.05506)
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen,
Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian,
Baolin Peng, Yi Mao, Wenhu Chen, and Xifeng
[Yan. 2022. Explanations from large language models](http://arxiv.org/abs/2210.06726)
[make small reasoners better.](http://arxiv.org/abs/2210.06726)
Shayne Longpre, Le Hou, Tu Vu, Albert Webson,
Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V
Le, Barret Zoph, Jason Wei, et al. 2023. The flan
collection: Designing data and methods for effective
instruction tuning. arXiv preprint arXiv:2301.13688.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023a. Wiz-](http://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](http://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](http://arxiv.org/abs/2308.09583)
Ziyang Luo, Can Xu, Pu Zhao, Qingfeng Sun, Xiubo
Geng, Wenxiang Hu, Chongyang Tao, Jing Ma, Qingwei Lin, and Daxin Jiang. 2023b. Wizardcoder:
Empowering code large language models with evolinstruct. arXiv preprint arXiv:2306.08568.
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason.](http://arxiv.org/abs/2212.08410)
Sewon Min, Mike Lewis, Luke Zettlemoyer, and Han[naneh Hajishirzi. 2022. MetaICL: Learning to learn](https://doi.org/10.18653/v1/2022.naacl-main.201)
[in context. In Proceedings of the 2022 Conference of](https://doi.org/10.18653/v1/2022.naacl-main.201)
_the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, pages 2791–2809, Seattle, United States._
Association for Computational Linguistics.
-----
OpenAI. 2023a. Chatgpt: Optimizing language
[models for dialogue. https://openai.com/blog/](https://openai.com/blog/chatgpt)
[chatgpt.](https://openai.com/blog/chatgpt)
[OpenAI. 2023b. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
A. Radford, J. Wu, R. Child, D. Luan, D. Amodei,, and
I. Sutskever. 2019. Language models are unsupervised multitask learners. Technical Report.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
[Wei Li, and Peter J. Liu. 2023. Exploring the limits](http://arxiv.org/abs/1910.10683)
[of transfer learning with a unified text-to-text trans-](http://arxiv.org/abs/1910.10683)
[former.](http://arxiv.org/abs/1910.10683)
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle,
Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish
Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal
Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
[Thomas Scialom, and Gabriel Synnaeve. 2023. Code](http://arxiv.org/abs/2308.12950)
[llama: Open foundation models for code.](http://arxiv.org/abs/2308.12950)
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
[Sachan. 2023. Distilling reasoning capabilities into](https://aclanthology.org/2023.findings-acl.441)
[smaller language models. In Findings of the Asso-](https://aclanthology.org/2023.findings-acl.441)
_ciation for Computational Linguistics: ACL 2023,_
pages 7059–7073, Toronto, Canada. Association for
Computational Linguistics.
[Xwin-Math Team. 2023. Xwin-math.](https://github.com/Xwin-LM/Xwin-LM/Xwin-Math)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023a. Llama: Open](http://arxiv.org/abs/2302.13971)
[and efficient foundation language models.](http://arxiv.org/abs/2302.13971)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023b. Llama 2: Open foundation and](http://arxiv.org/abs/2307.09288)
[fine-tuned chat models.](http://arxiv.org/abs/2307.09288)
Alex Wang, Amanpreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R Bowman. 2018.
Glue: A multi-task benchmark and analysis platform
for natural language understanding. arXiv preprint
_arXiv:1804.07461._
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
[Mingjie Zhan, and Hongsheng Li. 2023. Mathcoder:](http://arxiv.org/abs/2310.03731)
[Seamless code integration in llms for enhanced math-](http://arxiv.org/abs/2310.03731)
[ematical reasoning.](http://arxiv.org/abs/2310.03731)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He,
Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao.
[2023. Large language models are better reasoners](https://aclanthology.org/2023.findings-emnlp.167)
[with self-verification. In Findings of the Associa-](https://aclanthology.org/2023.findings-emnlp.167)
_tion for Computational Linguistics: EMNLP 2023,_
pages 2550–2575, Singapore. Association for Computational Linguistics.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
[Jiang. 2023. Wizardlm: Empowering large language](http://arxiv.org/abs/2304.12244)
[models to follow complex instructions.](http://arxiv.org/abs/2304.12244)
Weihao You, Shuo Yin, Zhilong Ji, Xudong Zhao, Guo[qiang Zhong, and Jinfeng Bai. 2024. Mumath: Multi-](https://openreview.net/forum?id=6Du1MIwLg4)
[perspective data augmentation for mathematical rea-](https://openreview.net/forum?id=6Du1MIwLg4)
[soning in large language models. In 2024 Annual](https://openreview.net/forum?id=6Du1MIwLg4)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics._
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2023. Meta-](http://arxiv.org/abs/2309.12284)
[math: Bootstrap your own mathematical questions](http://arxiv.org/abs/2309.12284)
[for large language models.](http://arxiv.org/abs/2309.12284)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling relationship on learning](http://arxiv.org/abs/2308.01825)
[mathematical reasoning with large language models.](http://arxiv.org/abs/2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.
[Mammoth: Building math generalist models through](http://arxiv.org/abs/2309.05653)
[hybrid instruction tuning.](http://arxiv.org/abs/2309.05653)
-----
**Examples**
**Example A.1: MuMath-Code-Data QA Pair**
**Question:**
What is the smallest whole number that has a remainder of 1 when divided by 4, a remainder of 1 when
divided by 3, and a remainder of 2 when divided by 5?
**Solution**
We need to find a positive integer solution for the system of linear congruences. We can use the Chinese
Remainder Theorem to solve this system. The Chinese Remainder Theorem states that for any system
of equations like this, there exists a solution. Let’s find the solution using sympy’s solve_congruence
function.
```python
from sympy.ntheory.modular import solve_congruence
def smallest_number():
congruences = [(’1’, ’4’), (’1’, ’3’), (’2’, ’5’)]
result = solve_congruence(*congruences)
return result[0]
result = smallest_number()
print(result)
```
```output
ValueError: 1 is not an integer
```
The code above has encountered a problem. Now point out its mistakes and then correct them.
Mistake:
The error is in the format of the input to the function ‘solve_congruence‘. The function is expecting
tuples of the format ‘(remainder, modulus)‘, but the inputs need to be integers, not strings.
Correction:
Let’s modify the code to convert the strings to integers.
```python
def smallest_number():
congruences = [(1, 4), (1, 3), (2, 5)]
result = solve_congruence(*congruences)
return result[0]
result = smallest_number()
print(result)
```
```output
NameError: name ’solve_congruence’ is not defined
```
The code above has encountered a problem. Now point out its mistakes and then correct them.
Mistake:
The error is that the ‘solve_congruence‘ function is not defined. It should be imported from the ‘sympy‘
module before it can be used.
-----
Correction:
Let’s modify the code to import the ‘solve_congruence‘ function from the ‘sympy‘ module.
```python
from sympy.ntheory.modular import solve_congruence
def smallest_number():
congruences = [(1, 4), (1, 3), (2, 5)]
result = solve_congruence(*congruences)
return result[0]
result = smallest_number()
print(result)
```
```output
37
```
The smallest whole number that has a remainder of 1 when divided by 4, a remainder of 1 when divided
by 3, and a remainder of 2 when divided by 5 is $\\boxed{37}$.
**Additional Experimental Results**
LLaMA CodeLlama
_Dµ-code_ Training Strategy GSM8K MATH GSM8K MATH
100K _Dµ-code_ 69.7 36.1 72.4 41
_Dµ →Dµ-code_ 77.1 41.5 80.3 46.1
200K _Dµ-code_ 76.2 41.4 78.3 44.2
_Dµ →Dµ-code_ 80.4 46 80.7 49.1
300K _Dµ-code_ 77.1 43.7 78.2 46.8
_Dµ →Dµ-code_ 79.5 46.4 83.2 50.2
400K _Dµ-code_ 78 44.3 79 47.8
_Dµ →Dµ-code_ 81.6 48.5 81.9 50.9
500K _Dµ-code_ 79.8 45.7 80.2 48.9
_Dµ →Dµ-code_ 82.8 48.7 82.6 52.2
600K _Dµ-code_ 81.2 46.2 81 49.8
_Dµ →Dµ-code_ 83.8 48.8 82.6 52.4
Table 5: We vary the data volumes of Dµ-code. It is observed that training solely on Dµ-code is consistently inferior
to the two-stage training across all data volumes.
-----
(a) Test on GSM8K (b) Test on MATH
Figure 4: Scaling all the subsets of MuMath-Code-Data. The model has already been finetuned on MuMath-Data. It
is observable that the curves show very similar trends to those in Figure 3.
-----
| [
"Shuo, Yin",
"Weihao, You",
"Zhilong, Ji",
"Guoqiang, Zhong",
"Jinfeng, Bai"
] | 2024-05-13T00:00:00 | NAACL 2024 Findings | false | 1 | 0 | null | http://arxiv.org/abs/2405.07551 | https://arxiv.org/abs/2405.07551 | https://www.semanticscholar.org/paper/512a5c307fdab29112a0f4af5c94a3436632eda1 |
Multiple-Choice Questions are Efficient and Robust LLM Evaluators | We present GSM-MC, a multiple-choice (MC) dataset constructed by collecting answers and incorrect predictions on GSM8K from 60 open-source models. Through extensive experiments, we show that LLMs' performance on the MC version of this popular benchmark is strongly correlated with their performance on the original version and is quite robust to distractor choices and option orders, while the evaluation time is reduced by a factor of up to 30. Following similar procedures, we introduce MATH-MC, constructed from MATH, and PythonIO, a new program reasoning MC dataset constructed from HumanEval and MBPP. Experimental results indicate that LLMs' performance on these MC benchmarks leaves much room for improvement. Our data and code are available at https://github.com/Geralt-Targaryen/MC-Evaluation. | GSM-MC, a multiple-choice (MC) dataset constructed by collecting answers and incorrect predictions on GSM8K from 60 open-source models, is presented and MATH-MC, constructed from MATH, and PythonIO, a new program reasoning MC dataset constructed from HumanEval and MBPP are introduced. | ## Multiple-Choice Questions are Efficient and Robust LLM Evaluators
**Ziyin Zhang** **Zhaokun Jiang** **Lizhen Xu** **Hongkun Hao** **Rui Wang[*]**
Shanghai Jiao Tong University
{daenerystargaryen, wangrui12}@sjtu.edu.cn
**Abstract**
We present GSM-MC, a multiple-choice (MC)
dataset constructed by collecting answers and
incorrect predictions on GSM8K from 60 opensource models. Through extensive experiments, we show that LLMs’ performance on
the MC version of this popular benchmark
is strongly correlated with their performance
on the original version and is quite robust to
distractor choices and option orders, while
the evaluation time is reduced by a factor
of up to 30. Following similar procedures,
we introduce MATH-MC, constructed from
MATH, and PythonIO, a new program reasoning MC dataset constructed from HumanEval
and MBPP. Experimental results indicate that
LLMs’ performance on these MC benchmarks
leaves much room for improvement. Our data
[and code are available at https://github.](https://github.com/Geralt-Targaryen/MC-Evaluation)
[com/Geralt-Targaryen/MC-Evaluation.](https://github.com/Geralt-Targaryen/MC-Evaluation)
**Original question:**
Natalia sold 4 clips in April, and half as many in May. How
many clips did she sell altogether in April and May?
**Answer:**
Natalia sold 4/2 = 2 clips in May.
Natalia sold 4+2 = 6 clips altogether in April and May.
#### 6
**Model predictions :**
(1) #### 6 ✅
(2) #### 4. ❌
(3) ### 6 ❓
(4) She sold six clips in total. ❓
(5) Let’s write a program to solve it!
print(4 + 4 / 2) ❓
**MC Question:**
Natalia sold 4 clips in April, and half as many in May. How
many clips did she sell altogether in April and May?
A. 4
B. 6
C. 2
D. 8
Answer:
**Softmax over model logits:**
(1) [0.2, 0.4, 0.3, 0.1] ✅
(2) [0.4, 0.2, 0.2, 0.2] ❌
**1** **Introduction**
MMLU (Hendrycks et al., 2021a), GSM8K (Cobbe
et al., 2021), MATH (Hendrycks et al., 2021b), HumanEval (Chen et al., 2021), and MBPP (Austin
et al., 2021) are currently the de facto most popular
benchmarks for evaluating LLMs. Among these
benchmarks, only MMLU is in multiple-choice
(MC) format, where model predictions can be efficiently extracted from output logits. In the other
benchmarks, the models are typically evaluated by
open-ended generation, from which the answers
are extracted.
However, as shown in Figure 1, LLMs may not
always follow the required answer format during
generation, which results in many false negatives
when the answers are heuristically extracted from
model generations and evaluated by exact match,
as in GSM8K and MATH.
To tackle this issue, in this work we investigate whether short-answer generation benchmarks
Figure 1: An illustrative example of correct, incorrect,
and invalid answers to one question from GSM8K (top).
After converting to multiple-choice format (bottom), a
prediction can always be extracted from model logits.
like GSM8K and MATH can be converted into a
multiple-choice format to prevent invalid answers
like those in Figure 1 from affecting the evaluation accuracy of LLMs. Using GSM8K as a proofof-concept example, we collect incorrect predictions from 60 open-source models to construct
a pool of distractors for each problem and convert the problems into an MC format similar to
MMLU (which we dub GSM-MC). Through extensive experiments involving different numbers
of choices (Section 3.4.1) and robustness against
different distractor choices and option orders (Section 3.4.2), we show that LLMs’ performance on
GSM-MC is robust to distractors and option orders,
and strongly correlated with the performance on
the original GSM8K regardless of choice numbers
*Corresponding author.
-----
(ranging from 2 to 8).
Inspired by the success of converting GSM8K to
GSM-MC, we repeat the same procedure on MATH
to construct MATH-MC. The two coding benchmarks, however, can not be naively converted into
MC format in the same way, which would result
in outrageously long and very unnatural questions.
Thus we follow one recent work (Gu et al., 2024)
and convert them into a program output prediction task instead, which we name PythonIO. An
overview of these datasets is provided in Table 1.
**2** **Related Work**
The evaluation of LLMs can be categorized as either generation-based or multiple-choice-based. To
compute a model’s score on one specific generation sample - such as a math word problem in
GSM8K (Cobbe et al., 2021) or one program synthesis problem in HumanEval (Chen et al., 2021) there are several classes of metrics: 1) the first one
is based on content overlap such as exact match,
BLEU (Papineni et al., 2002), and ROUGE (Lin,
2004); 2) the second one is based on model-scoring,
either by computing similarities between model
representations (e.g. BERTScore Zhang et al.,
2020), or by regression (e.g. BLEURT Sellam
et al., 2020), or by directly asking a powerful LLM
to grade the sample (Zheng et al., 2023); 3) the third
one, specific to code, is based on functional correctness, where a generated program is run against a set
of tests to verify their functions (Chen et al., 2021;
Zhang et al., 2023). Most reasoning-heavy evaluations - such as math and coding benchmarks, where
a small lexical difference in the answer could completely change its semantics - adopt the first and the
third types of metrics. However, these metrics also
require rigorous and possibly labor-intensive postprocessing of model generations to work correctly,
as exemplified in Figure 1.
On the other hand, to evaluate a model
on one MC question - such as one from
MMLU (Hendrycks et al., 2021a) - a binary score is
typically computed by checking whether the model
assigns the largest probability to the correct option
id (e.g. “A”) among all the option ids. While some
early works used other methods such as concatenating the content of each option to the question and
comparing their likelihood (Brown et al., 2020), it
has been argued in the literature that such methods underperform compared with directly asking
the model to output the answer id (Robinson and
Wingate, 2023).
Recently Several works have studied the effectiveness and robustness of evaluating LLMs on
MC benchmarks. Savelka et al. (2023) evaluated
GPT models on questions from a Python programming course, finding the models to struggle with
questions that require analysis and reasoning about
the code, such as output prediction. Zheng et al.
(2024) analyzed 20 LLMs’ performance on MMLU
and other MC benchmarks, finding that LLMs a
priori assign higher probability to certain answer
ids. Wang et al. (2024) also investigated LLMs’
performance on MMLU, finding them to be sensitive to the ordering of the four options in the question. However, all these works focused on existing
MC benchmarks. To our knowledge, no work has
explored the possibility of converting generation
benchmarks into MC format.
**3** **Converting GSM8K to Multiple-Choice**
**Format**
We first use GSM8K - which is relatively small in
size and can be straightforwardly converted into an
MC format - as a proof-of-concept example and
validate the rationality of converting short-answer
generation benchmarks into MC format.
**3.1** **A Closer Look at LLMs’ Performance on**
**GSM8K**
Using the original prompt format provided by
Cobbe et al. (2021), we evaluated a series of
open-source LLMs including Qwen1.5 (Bai et al.,
2023), LLaMA 2 and 3 (Touvron et al., 2023),
Mistral (Jiang et al., 2023), Gemma (Mesnard
et al., 2024), Phi 1-3 (Gunasekar et al., 2023; Li
et al., 2023b; Abdin et al., 2024), ChatGLM3 (Zeng
et al., 2023), Flan-T5 (Raffel et al., 2020; Chung
et al., 2022), Pythia (Biderman et al., 2023), and
BLOOM (Scao et al., 2022; Muennighoff et al.,
2023) on GSM8K. The results indicate that a nonnegligible portion of the wrong answers arises from
failure to parse model outputs, as shown in Figure 2.
Inspecting the invalid answers, we identify three
most common causes: meaningless repetition, not
highlighting the answer in the correct format, and
writing programs instead of solving the problems
directly. More details about the prompt and sample
outputs can be found in Figure 8, 9 in Appendix B.
In the main experiments we used greedy decoding for all the evaluated models and did not apply
any chat or instruction template for the aligned ver
-----
Benchmark Training Samples Test Samples Source Domain
GSM-MC 7468 1319 GSM8K grade school math word problems
MATH-MC 7278 4914 MATH high school math competitions
PythonIO 966 1684 HumanEval, MBPP Python program output prediction
Table 1: Overview of our MC datasets.
correct
incorrect
1200 invalid
1000
800
600
400
200
0
Qwen1.5-0.5BQwen1.5-1.8BQwen1.5-4BQwen1.5-7BQwen1.5-14BQwen1.5-32BQwen1.5-72BQwen1.5-0.5B-ChatQwen1.5-1.8B-ChatQwen1.5-4B-ChatQwen1.5-7B-ChatQwen1.5-14B-ChatQwen1.5-32B-ChatQwen1.5-72B-ChatMistral-7BMistral-7B-InstructLLaMA-2-7BLLaMA-2-13BLLaMA-2-70BLLaMA-2-7B-ChatLLaMA-2-13B-ChatLLaMA-2-70B-ChatLLaMA-3-8BLLaMA-3-70BLLaMA-3-8B-InstructLLaMA-3-70B-InstructGemma-2BGemma-7BGemma-2B-itGemma-7B-itPhi-1Phi-1.5Phi-2Phi-3 ChatGLM3-6B-BaseChatGLM3-6BPythia-70MPythia-160MPythia-410MPythia-1BPythia-1.4BPythia-2.8BPythia-6.9BPythia-12BFlan-T5-60MFlan-T5-220MFlan-T5-770MFlan-T5-3BFlan-T5-11BFlan-UL2BLOOM-0.56BBLOOM-1.1BBLOOM-1.7BBLOOM-3BBLOOM-7BBLOOMZ-0.56BBLOOMZ-1.1BBLOOMZ-1.7BBLOOMZ-3BBLOOMZ-7B
Figure 2: LLMs’ answer distributions on GSM8K. Smaller models and aligned models tend to produce more invalid
answers.
Mistral-7B-Inst LLaMA-3-8B-Inst Phi-3
correct
incorrect
invalid
Figure 3: Comparison of answer distribution by aligned
models with (top) and without (bottom) applying the
instruction template.
sions. In early experiments, We also evaluated the
instruct versions of LLaMA-3, Mistral, and Phi3 with their respective instruction template. As
shown in Figure 3, these templates lead to significantly more invalid responses, and also fewer correct answers for Mistral and LLaMA. We hypothesize that the instruction templates (for example,
prepending [INST] and appending [/INST] to the
prompt) interrupt the logical flow established by
the consecutive in-context examples and make it
harder for the models to follow the desired format.
**3.2** **Converting to Multiple-Choice Format**
To tackle the issue presented in Section 3.1, we
collected all the valid but incorrect answers produced by the evaluated models as a distractor pool
for each problem in the benchmark. We then constructed a new dataset following the format of
MMLU (see Figure 10 in Appendix B). We also
additionally generated distractors for the training
set to facilitate future research.
After converting to MC format, the evaluation
process no longer involves auto-regressive generation but simplifies into one softmax operation over
the option tokens’ corresponding output logits at
the end of the prompt. This leads to a significant reduction in computation cost: evaluating Qwen1.532B on the original GSM8K dataset takes 7 hours
on our machine (distributed across 3 RTX 3090)
while evaluating it on the newly constructed 4-way
MC dataset takes only 13 minutes on the same
-----
machine.
**3.3** **Can LLMs Understand Multiple-Choice**
**Questions?**
As previously mentioned, one advantage of
multiple-choice questions is that they enable the
evaluation of any language model on any subject
by simply comparing the output logits of the tokens
“A”, “B”, “C”, “D”. However, the output logits of
models are distributed over the entire vocabulary
instead of only these option ids, and it remains
unclear whether LLMs understand the multiplechoice format and tend to produce these tokens
over other irrelevant tokens in the vocabulary. Thus,
we first evaluated several models on one thousand
4-way MC problems constructed from the training
set and counted the frequency of the most likely
output token, presented in Figure 4.
From the figure, we observe that LLMs do
**understand multiple-choice format, but with a**
**heavy bias towards certain options, which may**
**be alleviated by alignment. For example, both**
BLOOM 7B and Pythia 6.9B only outputs B and C,
but never A and D for all the 1K problems, while the
output distribution is more balanced in BLOOM’s
aligned version BLOOMZ.
Another issue Figure 4 reveals is the options
tokenization. The currently most popular evaluation framework of MC problems provided by
Hendrycks et al. (2021a) directly tokenizes the options by calling tokenizer("A").input_ids[0].
However, this does not always yield the correct
token id, since some tokenizers treat “A” and “ A”
as different tokens. For example, in LLaMA 3 tokenizer, the id of token “A” is 32, produced by the
above script, while the id of token “ A” is 362, one
of the most likely tokens generated by the model
after an MC question. In our implementation, we
solve this issue by customizing the tokenization of
options for each model.
**3.4** **Rationality of MC Evaluation**
**3.4.1** **Correlation between MC Evaluation**
**and Open-Ended Evaluation**
From the experiments described in Section 3.1 and
3.2, we collected more than ten distractors for every problem in GSM8K’s test set. Using these
distractors, we constructed MC questions with different numbers of choices, ranging from 2-way to
8-way. We evaluated all the models mentioned in
Section 3.1 on these seven suites of MC problems,
and their performance is plotted against the performance on original GSM8K in Figure 5. The
**results are strongly correlated with statistical**
**significance in all cases.**
To further explore the relation between models’
performance on GSM8K and GSM-MC, we also
visualize the questions in both datasets using the
correctness of 40 models with non-trivial performance as features, as shown in Figure 6. In the
2-way setting, the MC questions are clearly structured into two clusters. This is explained by the
fact that between the options “A” and “B”, some
models are biased towards the former while others
are biased towards the latter (as shown in Figure 4),
which results in the features of questions with correct answer “A” and those with correct answer “B”
having distinct distributions. However, as can be
seen in Figure 6, as the number of options in**creases in the MC questions, this correctness**
**distribution gap is reduced, and the overall dis-**
**tribution of the MC questions also moves closer**
**to that of the generative questions.**
Based on these findings, we consider 4-way MC
problems by default in the rest of this work and
in our released datasets, as 4-way questions are
the most common MC format and our experiments
also suggest that 4-way GSM-MC yields a similar model performance distribution to the original
GSM8K. However, to contribute to future research,
we also release all the distractors used to construct
MC questions with more options.
**3.4.2** **Robustness against Distractors and**
**Choice Orders**
Many works studying LLMs’ performance on MC
questions have suggested that LLMs are not robust
to choice orders in MC problems (Robinson and
Wingate, 2023; Zheng et al., 2024; Wang et al.,
2024). Thus, to explore LLMs’ robustness on
GSM-MC, we constructed ten different sets of 4way MC problems from the distractor pool where
both the choice of the three distractors and the
order of the four options are randomized and repeated the previous experiments on these ten sets
of problems. The results are plotted in Figure 7,
where it can be observed that the variation of one
model’s performance is quite small compared with
the inter-model difference.
We also experimented with an alternative strategy for generating distractors, where for a question
with ground truth answer n, we randomly sample
-----
|D B C A|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
bloom-7b mistral-7b qwen1.5-7b
0 200 400 600 800 0 100 200 300 400 500 0 100 200 300 400
B C C
C B B
D A
A D
llama-2-7b pythia-6.9b phi-2
0 100 200 300 400 500 0 200 400 600 800 0 50 100 150 200 250 300 350
C C A
B B C
A B
D D
bloomz-7b mistral-7b-instruct qwen1.5-7b-chat
0 100 200 300 400 500 600 0 50 100 150 200 250 300 350 0 50 100 150 200 250 300 350
D D B
C B C
B C A
A A D
<|im_end|>
llama-2-7b-chat flan-t5-11b phi-3
0 100 200 300 400 0 50 100 150 200 250 300 350 0 50 100 150 200 250 300 350
C C B
B B A
D D C
A A D
Figure 4: Frequency of most likely output token over 1K training set problems on GSM-MC by base models (top)
and aligned models (bottom). The ground truth answers of the 1K problems are balanced across the four options.
in other cases (such as LaTeX expressions). Thus
we recommend using model-generated distractors
in future research.
**4** **MATH-MC and PythonIO**
Inspired by the success of converting GSM8K to
MC format, we also convert three other popular
LLM evaluation benchmarks - MATH, HumanEval,
and MBPP - into MC format to accelerate the evaluation of LLMs.
**MATH** For MATH, we generated distractors
with ChatGLM3, Qwen1.5, Gemma, Mistral, and
Phi-2. As the answers are all latex expressions in
this dataset, we used SymPy[2] to remove lexically
different but semantically equivalent answers. After collecting the distractors, we filtered out a small
number of questions where the ground truth answer
extracted from the original solution is ambiguous
(either empty or has more than one answer), which
leaves us with 7.3K training samples and 4.9K test
samples.
[2https://www.sympy.org/en/index.html](https://www.sympy.org/en/index.html)
Correlation with
Distractors Std
generation scores
model-generated 1.083 0.859
randomly sampled 1.017 0.705
Table 2: Comparision of model scores on GSM-MC
with model-generated and randomly sampled distractors:
standard deviation across ten sets of questions, and mean
correlation with the scores on the original GSM8K.
distractors in the interval [0.5n, 1.5n][1]. Like the
previous experiment, we constructed ten sets of
randomized questions and evaluated the models on
them. We find that in this setting, the models’ performance variation across the ten sets of problems
is about the same as model-generated distractors,
but the average correlation between scores on MC
questions and the scores on the original GSM8K is
much weaker, as shown in Table 2. Also, this strategy only applies to benchmarks where the ground
truth answers are straightforward numbers, but fails
1When n is less than 40, we sample in [n − 20, n + 20]
instead.
-----
|2-way (0.8756)|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|2-way (0.8756)||||||
|||||||
|||||||
|||||||
|||||||
|||||||
|||||||
|||||||
|||||||
80 2-way (0.8756) 80 3-way (0.8711) 80 4-way (0.8474) 80 5-way (0.8740)
70 70 70 70
60 60 60 60
50 50 50 50
MC40 MC40 MC40 MC40
30 30 30 30
20 20 20 20
10 10 10 10
0 0 0 0
0 20 40 60 80 0 20 40 60 80 0 20 40 60 80 0 20 40 60 80
Generation Generation Generation Generation
80 6-way (0.8636) 80 7-way (0.8116) 80 8-way (0.7989)
70 70 70
60 60 60
50 50 50
MC40 MC40 MC40
30 30 30
20 20 20
10 10 10
0 0 0
0 20 40 60 80 0 20 40 60 80 0 20 40 60 80
Generation Generation Generation
Figure 5: Model performance on GSM-MC (with the number of choices ranging from 2 to 8) and the original
GSM8K. Each point is one model’s score on GSM8K (x-axis) and one version of GSM-MC (y-axis), and the
best-fitting line is given in red. The MC scores are strongly correlated with generation scores (Pearson correlation
shown in each subplot’s title), with a p-value less than 0.001 in all cases, indicating statistical significance.
**HumanEval and MBPP** For the code generation
datasets, we follow Gu et al. (2024) and convert the
task into program output prediction instead. We
heuristically extracted and manually verified inputoutput pairs from the unit tests in HumanEval and
MBPP, and used Qwen1.5, Mistral, ChatGLM3,
LLaMA-3, Phi-3, Gemma, and StarCoder (Li et al.,
2023a) to generate distractors. We only retained
distractors that can be successfully evaluated by
a Python interpreter, and removed any duplicates.
For the train/test split, we use all programs from
HumanEval and the test set of MBPP as test samples, and the rest of MBPP as training samples.
The selected results of our evaluated models on
MATH-MC and PythonIO, along with GSM-MC,
are resented in Table 3 (the complete results are
given in Appendix A). Overall, LLaMA-3 70B Instruct is the best performing model among all the
evaluated models, scoring 61.1 on GSM-MC, 60.3
on MATH-MC, and 70.1 on PythonIO. Also, all
three benchmarks prove to be rather challenging
tasks, with few models scoring higher than 50, leaving much room for improvement.
**5** **Conclusion**
a new program reasoning benchmark PythonIO
from HumanEval and MBPP. Through extensive
experiments, we show that LLMs’ performance on
GSM-MC is strongly correlated with their performance on the original GSM8K using open-ended
generation, regardless of choice numbers and option orders. With the introduction of these three
benchmarks, we hope to facilitate more efficient
LLM evaluation in the research community.
**Limitations**
Due to limited computation resources, throughout
this work we used only GSM8K and GSM-MC
as a proof-of-concept example to discuss the relation between a short-answer generation benchmark
and its multiple-choice version. In terms of the
other two benchmarks, MATH includes three times
more questions than GSM8K, thus we expect most
of the conclusions regarding robustness that we
drew from experiments on GSM-MC to also hold
on MATH-MC. As for PythonIO, the newly constructed benchmark evaluates a different capability
(input-output reasoning) compared with the original HumanEval and MBPP (program synthesis),
and is thus not directly comparable.
Also, the methodology taken in this work
only applies to generation benchmarks with short,
unique ground truth answers, but not other open
In this work, we convert two of the most popular
LLM evaluation benchmarks - GSM8K and MATH
- into multiple-choice format, and also construct
-----
2-way 3-way 4-way 5-way
40 40
40
40 30
20 20
20
20 10
0
0 0 0
20 10
20 20 20
40 30
40 40 40
40 20 0 20 40 60 40 20 0 20 40 60 40 20 0 20 40 60 60 40 20 0 20 40 60
6-way 7-way 8-way
40
30 30
30
20 20
20
10 10
10
0 0 0 MC
10 10 10 generation
20
20 20
30
30 30
40
60 40 20 0 20 40 60 60 40 20 0 20 40 60 60 40 20 0 20 40 60
Figure 6: T-SNE visualization of questions in GSM8K and GSM-MC, using 40 models’ correctness on each question
as features. Starting from 4-way, the distribution of MC questions has a high overlap with generative questions.
ended generation tasks such as machine translation
and summarization. We leave the exploration of
whether these tasks can also be evaluated more efficiently in multiple-choice format to future works.
**Ethics Statement**
Regarding the data resources from which GSMMC, MAHT-MC, and PythonIO are constructed,
GSM8K, MATH, and HumanEval are released
under MIT license, and MBPP is released under
Apache 2.0 license. If you use, adapt, or redistribute our benchmarks, please also cite the original resources and include the corresponding license
information. Our benchmarks should not be used
outside of research contexts.
**References**
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan,
Jyoti Aneja, Ahmed Awadallah, Hany Awadalla,
Nguyen Bach, Amit Bahree, Arash Bakhtiari,
Harkirat Behl, Alon Benhaim, Misha Bilenko, Johan Bjorck, Sébastien Bubeck, Martin Cai, Caio
César Teodoro Mendes, Weizhu Chen, Vishrav
Chaudhary, Parul Chopra, Allie Del Giorno, Gustavo
de Rosa, Matthew Dixon, Ronen Eldan, Dan Iter,
Amit Garg, Abhishek Goswami, Suriya Gunasekar,
Emman Haider, Junheng Hao, Russell J. Hewett,
Jamie Huynh, Mojan Javaheripi, Xin Jin, Piero Kauffmann, Nikos Karampatziakis, Dongwoo Kim, Mahoud Khademi, Lev Kurilenko, James R. Lee, Yin Tat
Lee, Yuanzhi Li, Chen Liang, Weishung Liu, Eric
Lin, Zeqi Lin, Piyush Madan, Arindam Mitra, Hardik
Modi, Anh Nguyen, Brandon Norick, Barun Patra,
Daniel Perez-Becker, Thomas Portet, Reid Pryzant,
Heyang Qin, Marko Radmilac, Corby Rosset, Sambudha Roy, Olatunji Ruwase, Olli Saarikivi, Amin
Saied, Adil Salim, Michael Santacroce, Shital Shah,
Ning Shang, Hiteshi Sharma, Xia Song, Masahiro
Tanaka, Xin Wang, Rachel Ward, Guanhua Wang,
Philipp Witte, Michael Wyatt, Can Xu, Jiahang Xu,
Sonali Yadav, Fan Yang, Ziyi Yang, Donghan Yu,
Chengruidong Zhang, Cyril Zhang, Jianwen Zhang,
Li Lyna Zhang, Yi Zhang, Yue Zhang, Yunan Zhang,
[and Xiren Zhou. 2024. Phi-3 technical report: A](https://doi.org/10.48550/ARXIV.2404.14219)
[highly capable language model locally on your phone.](https://doi.org/10.48550/ARXIV.2404.14219)
_CoRR, abs/2404.14219._
Jacob Austin, Augustus Odena, Maxwell I. Nye,
Maarten Bosma, Henryk Michalewski, David Dohan,
Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le,
[and Charles Sutton. 2021. Program synthesis with](http://arxiv.org/abs/2108.07732)
[large language models. CoRR, abs/2108.07732.](http://arxiv.org/abs/2108.07732)
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang,
Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei
Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin,
Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu,
Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren,
Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong
Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang
Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian
Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi
Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang,
Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. 2023.
[Qwen technical report. CoRR, abs/2309.16609.](https://doi.org/10.48550/ARXIV.2309.16609)
Stella Biderman, Hailey Schoelkopf, Quentin Gregory
Anthony, Herbie Bradley, Kyle O’Brien, Eric Hallahan, Mohammad Aflah Khan, Shivanshu Purohit,
-----
60
50
40
30
20
Qwen1.5-0.5BQwen1.5-1.8BQwen1.5-4BQwen1.5-7BQwen1.5-14BQwen1.5-32BQwen1.5-72BQwen1.5-0.5B-ChatQwen1.5-1.8B-ChatQwen1.5-4B-ChatQwen1.5-7B-ChatQwen1.5-14B-ChatQwen1.5-32B-ChatQwen1.5-72B-ChatMistral-7BMistral-7B-InstructLLaMA-2-7BLLaMA-2-13BLLaMA-2-70BLLaMA-2-7B-ChatLLaMA-2-13B-ChatLLaMA-2-70B-ChatLLaMA-3-8BLLaMA-3-70BLLaMA-3-8B-InstructLLaMA-3-70B-InstructGemma-2BGemma-7BGemma-2B-itGemma-7B-itPhi-1Phi-1.5Phi-2Phi-3 ChatGLM3-6B-BaseChatGLM3-6BPythia-70MPythia-160MPythia-410MPythia-1BPythia-1.4BPythia-2.8BPythia-6.9BPythia-12BFlan-T5-60MFlan-T5-220MFlan-T5-770MFlan-T5-3BFlan-T5-11BFlan-UL2BLOOM-0.56BBLOOM-1.1BBLOOM-1.7BBLOOM-3BBLOOM-7BBLOOMZ-0.56BBLOOMZ-1.1BBLOOMZ-1.7BBLOOMZ-3BBLOOMZ-7B
Figure 7: Performance variation on 4-way GSM-MC across ten sets of questions with different option orders and
distractors.
USVSN Sai Prashanth, Edward Raff, Aviya Skowron,
Lintang Sutawika, and Oskar van der Wal. 2023.
[Pythia: A suite for analyzing large language models](https://proceedings.mlr.press/v202/biderman23a.html)
[across training and scaling. In International Con-](https://proceedings.mlr.press/v202/biderman23a.html)
_ference on Machine Learning, ICML 2023, 23-29_
_July 2023, Honolulu, Hawaii, USA, volume 202 of_
_Proceedings of Machine Learning Research, pages_
2397–2430. PMLR.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners. In Ad-](https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html)
_vances in Neural Information Processing Systems 33:_
_Annual Conference on Neural Information Process-_
_ing Systems 2020, NeurIPS 2020, December 6-12,_
_2020, virtual._
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan,
Henrique Pondé de Oliveira Pinto, Jared Kaplan,
Harrison Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Joshua Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
[Sutskever, and Wojciech Zaremba. 2021. Evaluat-](http://arxiv.org/abs/2107.03374)
[ing large language models trained on code. CoRR,](http://arxiv.org/abs/2107.03374)
abs/2107.03374.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret
Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang,
Mostafa Dehghani, Siddhartha Brahma, Albert Webson, Shixiang Shane Gu, Zhuyun Dai, Mirac Suzgun, Xinyun Chen, Aakanksha Chowdhery, Sharan
Narang, Gaurav Mishra, Adams Yu, Vincent Y. Zhao,
Yanping Huang, Andrew M. Dai, Hongkun Yu, Slav
Petrov, Ed H. Chi, Jeff Dean, Jacob Devlin, Adam
Roberts, Denny Zhou, Quoc V. Le, and Jason Wei.
[2022. Scaling instruction-finetuned language models.](https://doi.org/10.48550/ARXIV.2210.11416)
_CoRR, abs/2210.11416._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
Alex Gu, Baptiste Rozière, Hugh Leather, Armando
Solar-Lezama, Gabriel Synnaeve, and Sida I. Wang.
2024. Cruxeval: [A benchmark for code rea-](https://doi.org/10.48550/ARXIV.2401.03065)
[soning, understanding and execution.](https://doi.org/10.48550/ARXIV.2401.03065) _CoRR,_
abs/2401.03065.
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio
César Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo
-----
Model GSM-MC MATH-MC PythonIO Average
Qwen1.5-7B 38.43 1.43 42.96 32.78 38.06
_±_
Qwen1.5-14B 45.40 0.92 50.65 40.86 45.64
_±_
Qwen1.5-32B 50.83 1.10 54.48 51.78 52.36
_±_
Qwen1.5-72B 53.28 0.89 55.92 50.36 53.19
_±_
Qwen1.5-7B-Chat 37.85 1.26 43.85 32.48 38.06
_±_
Qwen1.5-14B-Chat 46.46 0.99 49.98 40.86 45.77
_±_
Qwen1.5-32B-Chat 51.92 1.01 55.13 48.57 51.87
_±_
Qwen1.5-72B-Chat 52.30 1.24 56.33 50.65 53.09
_±_
Mistral-7B 31.74 1.09 34.11 31.65 32.50
_±_
Mistral-7B-Instruct 31.00 0.79 28.27 25.89 28.39
_±_
LLaMA-2-13B 31.48 1.27 30.12 26.60 29.40
_±_
LLaMA-2-70B 41.92 1.22 40.64 38.24 40.27
_±_
LLaMA-2-13B-Chat 29.77 0.79 28.94 28.03 28.91
_±_
LLaMA-2-70B-Chat 34.14 1.27 32.36 31.47 32.66
_±_
LLaMA-3-8B 33.52 1.15 37.63 34.38 35.18
_±_
LLaMA-3-70B 49.58 1.00 53.99 59.92 54.50
_±_
LLaMA-3-8B-Instruct 36.10 1.07 37.61 38.95 37.55
_±_
LLaMA-3-70B-Instruct 61.14 1.37 60.26 70.07 63.82
_±_
Gemma-7B 37.33 1.04 38.36 30.52 35.40
_±_
Gemma-7B-it 32.62 0.90 33.52 27.97 31.37
_±_
Phi-2 30.98 1.08 30.44 29.75 30.39
_±_
Phi-3 39.26 1.45 41.39 38.24 39.63
_±_
ChatGLM3-6B-Base 35.32 1.13 37.32 27.73 33.46
_±_
ChatGLM3-6B 29.69 1.43 31.42 26.13 29.08
_±_
Flan-T5-3B 25.03 0.87 24.93 26.60 25.52
_±_
Flan-T5-11B 32.80 1.63 26.43 29.33 29.52
_±_
Flan-UL2 31.54 0.97 27.35 25.95 28.28
_±_
Table 3: Selected results on GSM-MC, MATH-MC, and PythonIO. The results for GSM-MC are the mean value of
the ten sets of different problems in Figure 7, with standard deviation given in subscript.
de Rosa, Olli Saarikivi, Adil Salim, Shital Shah,
Harkirat Singh Behl, Xin Wang, Sébastien Bubeck,
Ronen Eldan, Adam Tauman Kalai, Yin Tat Lee, and
[Yuanzhi Li. 2023. Textbooks are all you need. CoRR,](https://doi.org/10.48550/ARXIV.2306.11644)
abs/2306.11644.
Dan Hendrycks, Collin Burns, Steven Basart, Andy
Zou, Mantas Mazeika, Dawn Song, and Jacob Stein[hardt. 2021a. Measuring massive multitask language](https://openreview.net/forum?id=d7KBjmI3GmQ)
[understanding. In 9th International Conference on](https://openreview.net/forum?id=d7KBjmI3GmQ)
_Learning Representations, ICLR 2021, Virtual Event,_
_Austria, May 3-7, 2021. OpenReview.net._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021b. Measuring mathematical](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
[problem solving with the MATH dataset. In Pro-](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
_ceedings of the Neural Information Processing Sys-_
_tems Track on Datasets and Benchmarks 1, NeurIPS_
_Datasets and Benchmarks 2021, December 2021, vir-_
_tual._
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de Las Casas, Florian Bressand, Gianna Lengyel,
Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud, Marie-Anne Lachaux, Pierre Stock,
Teven Le Scao, Thibaut Lavril, Thomas Wang, Timo[thée Lacroix, and William El Sayed. 2023. Mistral](https://doi.org/10.48550/ARXIV.2310.06825)
[7b. CoRR, abs/2310.06825.](https://doi.org/10.48550/ARXIV.2310.06825)
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas
Muennighoff, Denis Kocetkov, Chenghao Mou,
Marc Marone, Christopher Akiki, Jia Li, Jenny
Chim, Qian Liu, Evgenii Zheltonozhskii, Terry Yue
Zhuo, Thomas Wang, Olivier Dehaene, Mishig
Davaadorj, Joel Lamy-Poirier, João Monteiro, Oleh
Shliazhko, Nicolas Gontier, Nicholas Meade, Armel
Zebaze, Ming-Ho Yee, Logesh Kumar Umapathi,
Jian Zhu, Benjamin Lipkin, Muhtasham Oblokulov,
Zhiruo Wang, Rudra Murthy V, Jason Stillerman,
Siva Sankalp Patel, Dmitry Abulkhanov, Marco
Zocca, Manan Dey, Zhihan Zhang, Nour MoustafaFahmy, Urvashi Bhattacharyya, Wenhao Yu, Swayam
Singh, Sasha Luccioni, Paulo Villegas, Maxim Kunakov, Fedor Zhdanov, Manuel Romero, Tony Lee,
Nadav Timor, Jennifer Ding, Claire Schlesinger, Hailey Schoelkopf, Jan Ebert, Tri Dao, Mayank Mishra,
Alex Gu, Jennifer Robinson, Carolyn Jane Anderson, Brendan Dolan-Gavitt, Danish Contractor, Siva
-----
Reddy, Daniel Fried, Dzmitry Bahdanau, Yacine Jernite, Carlos Muñoz Ferrandis, Sean Hughes, Thomas
Wolf, Arjun Guha, Leandro von Werra, and Harm
[de Vries. 2023a. Starcoder: may the source be with](https://doi.org/10.48550/ARXIV.2305.06161)
[you! CoRR, abs/2305.06161.](https://doi.org/10.48550/ARXIV.2305.06161)
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del
Giorno, Suriya Gunasekar, and Yin Tat Lee. 2023b.
[Textbooks are all you need II: phi-1.5 technical report.](https://doi.org/10.48550/ARXIV.2309.05463)
_CoRR, abs/2309.05463._
[Chin-Yew Lin. 2004. ROUGE: A package for auto-](https://aclanthology.org/W04-1013)
[matic evaluation of summaries. In Text Summariza-](https://aclanthology.org/W04-1013)
_tion Branches Out, pages 74–81, Barcelona, Spain._
Association for Computational Linguistics.
Thomas Mesnard, Cassidy Hardin, Robert Dadashi,
Surya Bhupatiraju, Shreya Pathak, Laurent Sifre,
Morgane Rivière, Mihir Sanjay Kale, Juliette Love,
Pouya Tafti, Léonard Hussenot, Aakanksha Chowdhery, Adam Roberts, Aditya Barua, Alex Botev, Alex
Castro-Ros, Ambrose Slone, Amélie Héliou, Andrea
Tacchetti, Anna Bulanova, Antonia Paterson, Beth
Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clément Crepy, Daniel Cer,
Daphne Ippolito, David Reid, Elena Buchatskaya,
Eric Ni, Eric Noland, Geng Yan, George Tucker,
George-Christian Muraru, Grigory Rozhdestvenskiy,
Henryk Michalewski, Ian Tenney, Ivan Grishchenko,
Jacob Austin, James Keeling, Jane Labanowski,
Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan,
Jeremy Chen, Johan Ferret, Justin Chiu, and et al.
[2024. Gemma: Open models based on gemini re-](https://doi.org/10.48550/ARXIV.2403.08295)
[search and technology. CoRR, abs/2403.08295.](https://doi.org/10.48550/ARXIV.2403.08295)
Niklas Muennighoff, Thomas Wang, Lintang Sutawika,
Adam Roberts, Stella Biderman, Teven Le Scao,
M. Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, Xiangru Tang, Dragomir Radev,
Alham Fikri Aji, Khalid Almubarak, Samuel Albanie, Zaid Alyafeai, Albert Webson, Edward Raff,
and Colin Raffel. 2023. [Crosslingual generaliza-](https://doi.org/10.18653/V1/2023.ACL-LONG.891)
[tion through multitask finetuning. In Proceedings](https://doi.org/10.18653/V1/2023.ACL-LONG.891)
_of the 61st Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
_ACL 2023, Toronto, Canada, July 9-14, 2023, pages_
15991–16111. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei[Jing Zhu. 2002. Bleu: a method for automatic evalu-](https://doi.org/10.3115/1073083.1073135)
[ation of machine translation. In Proceedings of the](https://doi.org/10.3115/1073083.1073135)
_40th Annual Meeting of the Association for Compu-_
_tational Linguistics, July 6-12, 2002, Philadelphia,_
_PA, USA, pages 311–318. ACL._
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
[Wei Li, and Peter J. Liu. 2020. Exploring the limits](http://jmlr.org/papers/v21/20-074.html)
[of transfer learning with a unified text-to-text trans-](http://jmlr.org/papers/v21/20-074.html)
[former. J. Mach. Learn. Res., 21:140:1–140:67.](http://jmlr.org/papers/v21/20-074.html)
[Joshua Robinson and David Wingate. 2023. Leveraging](https://openreview.net/pdf?id=yKbprarjc5B)
[large language models for multiple choice question](https://openreview.net/pdf?id=yKbprarjc5B)
[answering. In The Eleventh International Confer-](https://openreview.net/pdf?id=yKbprarjc5B)
_ence on Learning Representations, ICLR 2023, Ki-_
_gali, Rwanda, May 1-5, 2023. OpenReview.net._
Jaromír Savelka, Arav Agarwal, Christopher Bogart,
[and Majd Sakr. 2023. Large language models (GPT)](https://doi.org/10.5220/0011996900003470)
[struggle to answer multiple-choice questions about](https://doi.org/10.5220/0011996900003470)
[code. In Proceedings of the 15th International Con-](https://doi.org/10.5220/0011996900003470)
_ference on Computer Supported Education, CSEDU_
_2023, Prague, Czech Republic, April 21-23, 2023,_
_Volume 2, pages 47–58. SCITEPRESS._
Teven Le Scao, Angela Fan, Christopher Akiki, Ellie Pavlick, Suzana Ilic, Daniel Hesslow, Roman
Castagné, Alexandra Sasha Luccioni, François Yvon,
Matthias Gallé, Jonathan Tow, Alexander M. Rush,
Stella Biderman, Albert Webson, Pawan Sasanka Ammanamanchi, Thomas Wang, Benoît Sagot, Niklas
Muennighoff, Albert Villanova del Moral, Olatunji
Ruwase, Rachel Bawden, Stas Bekman, Angelina
McMillan-Major, Iz Beltagy, Huu Nguyen, Lucile
Saulnier, Samson Tan, Pedro Ortiz Suarez, Victor Sanh, Hugo Laurençon, Yacine Jernite, Julien
Launay, Margaret Mitchell, Colin Raffel, Aaron
Gokaslan, Adi Simhi, Aitor Soroa, Alham Fikri
Aji, Amit Alfassy, Anna Rogers, Ariel Kreisberg
Nitzav, Canwen Xu, Chenghao Mou, Chris Emezue,
Christopher Klamm, Colin Leong, Daniel van Strien,
[David Ifeoluwa Adelani, and et al. 2022. BLOOM:](https://doi.org/10.48550/ARXIV.2211.05100)
[A 176b-parameter open-access multilingual language](https://doi.org/10.48550/ARXIV.2211.05100)
[model. CoRR, abs/2211.05100.](https://doi.org/10.48550/ARXIV.2211.05100)
Thibault Sellam, Dipanjan Das, and Ankur P. Parikh.
2020. [BLEURT: learning robust metrics for text](https://doi.org/10.18653/V1/2020.ACL-MAIN.704)
[generation. In Proceedings of the 58th Annual Meet-](https://doi.org/10.18653/V1/2020.ACL-MAIN.704)
_ing of the Association for Computational Linguistics,_
_ACL 2020, Online, July 5-10, 2020, pages 7881–7892._
Association for Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian CantonFerrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurélien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](https://doi.org/10.48550/ARXIV.2307.09288)
[tuned chat models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
Haochun Wang, Sendong Zhao, Zewen Qiang, Bing Qin,
[and Ting Liu. 2024. Beyond the answers: Reviewing](https://doi.org/10.48550/ARXIV.2402.01349)
-----
[the rationality of multiple choice question answering](https://doi.org/10.48550/ARXIV.2402.01349)
[for the evaluation of large language models. CoRR,](https://doi.org/10.48550/ARXIV.2402.01349)
abs/2402.01349.
Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang,
Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu,
Wendi Zheng, Xiao Xia, Weng Lam Tam, Zixuan Ma,
Yufei Xue, Jidong Zhai, Wenguang Chen, Zhiyuan
Liu, Peng Zhang, Yuxiao Dong, and Jie Tang. 2023.
[GLM-130B: an open bilingual pre-trained model. In](https://openreview.net/pdf?id=-Aw0rrrPUF)
_The Eleventh International Conference on Learning_
_Representations, ICLR 2023, Kigali, Rwanda, May_
_1-5, 2023. OpenReview.net._
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q.
[Weinberger, and Yoav Artzi. 2020. Bertscore: Evalu-](https://openreview.net/forum?id=SkeHuCVFDr)
[ating text generation with BERT. In 8th International](https://openreview.net/forum?id=SkeHuCVFDr)
_Conference on Learning Representations, ICLR 2020,_
_Addis Ababa, Ethiopia, April 26-30, 2020. OpenRe-_
view.net.
Ziyin Zhang, Chaoyu Chen, Bingchang Liu, Cong Liao,
Zi Gong, Hang Yu, Jianguo Li, and Rui Wang. 2023.
[Unifying the perspectives of nlp and software en-](https://doi.org/10.48550/ARXIV.2311.07989)
[gineering: A survey on language models for code.](https://doi.org/10.48550/ARXIV.2311.07989)
_CoRR, abs/2311.07989._
Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, and
[Minlie Huang. 2024. Large language models are not](https://openreview.net/forum?id=shr9PXz7T0)
[robust multiple choice selectors. In The Twelfth Inter-](https://openreview.net/forum?id=shr9PXz7T0)
_national Conference on Learning Representations._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang,
[Joseph E. Gonzalez, and Ion Stoica. 2023. Judging](http://papers.nips.cc/paper_files/paper/2023/hash/91f18a1287b398d378ef22505bf41832-Abstract-Datasets_and_Benchmarks.html)
[llm-as-a-judge with mt-bench and chatbot arena. In](http://papers.nips.cc/paper_files/paper/2023/hash/91f18a1287b398d378ef22505bf41832-Abstract-Datasets_and_Benchmarks.html)
_Advances in Neural Information Processing Systems_
_36: Annual Conference on Neural Information Pro-_
_cessing Systems 2023, NeurIPS 2023, New Orleans,_
_LA, USA, December 10 - 16, 2023._
-----
**A** **Complete Results**
Model GSM-MC MATH-MC PythonIO Average
Qwen1.5-0.5B 27.81 1.21 25.11 25.18 26.03
_±_
Qwen1.5-1.8B 28.46 0.80 28.90 26.43 27.93
_±_
Qwen1.5-4B 34.75 1.15 37.81 25.71 32.76
_±_
Qwen1.5-7B 38.43 1.43 42.96 32.78 38.06
_±_
Qwen1.5-14B 45.40 0.92 50.65 40.86 45.64
_±_
Qwen1.5-32B 50.83 1.10 54.48 51.78 52.36
_±_
Qwen1.5-72B 53.28 0.89 55.92 50.36 53.19
_±_
Qwen1.5-0.5B-Chat 26.75 1.07 24.28 28.03 26.35
_±_
Qwen1.5-1.8B-Chat 28.08 1.08 28.35 26.31 27.58
_±_
Qwen1.5-4B-Chat 32.68 1.01 36.35 25.65 31.56
_±_
Qwen1.5-7B-Chat 37.85 1.26 43.85 32.48 38.06
_±_
Qwen1.5-14B-Chat 46.46 0.99 49.98 40.86 45.77
_±_
Qwen1.5-32B-Chat 51.92 1.01 55.13 48.57 51.87
_±_
Qwen1.5-72B-Chat 52.30 1.24 56.33 50.65 53.09
_±_
Mistral-7B 31.74 1.09 34.11 31.65 32.50
_±_
Mistral-7B-Instruct 31.00 0.79 28.27 25.89 28.39
_±_
LLaMA-2-7B 27.48 0.92 29.08 23.04 26.53
_±_
LLaMA-2-13B 31.48 1.27 30.12 26.60 29.40
_±_
LLaMA-2-70B 41.92 1.22 40.64 38.24 40.27
_±_
LLaMA-2-7B-Chat 26.27 1.25 26.48 25.53 26.09
_±_
LLaMA-2-13B-Chat 29.77 0.79 28.94 28.03 28.91
_±_
LLaMA-2-70B-Chat 34.14 1.27 32.36 31.47 32.66
_±_
LLaMA-3-8B 33.52 1.15 37.63 34.38 35.18
_±_
LLaMA-3-70B 49.58 1.00 53.99 59.92 54.50
_±_
LLaMA-3-8B-Instruct 36.10 1.07 37.61 38.95 37.55
_±_
LLaMA-3-70B-Instruct 61.14 1.37 60.26 70.07 63.82
_±_
Gemma-2B 26.50 1.13 26.31 24.29 25.70
_±_
Gemma-7B 37.33 1.04 38.36 30.52 35.40
_±_
Gemma-2B-it 24.82 1.10 24.99 24.64 24.82
_±_
Gemma-7B-it 32.62 0.90 33.52 27.97 31.37
_±_
Phi-1 25.46 0.91 25.15 26.19 25.60
_±_
Phi-1.5 27.09 1.24 26.62 22.80 25.50
_±_
Phi-2 30.98 1.08 30.44 29.75 30.39
_±_
Phi-3 39.26 1.45 41.39 38.24 39.63
_±_
ChatGLM3-6B-Base 35.32 1.13 37.32 27.73 33.46
_±_
ChatGLM3-6B 29.69 1.43 31.42 26.13 29.08
_±_
Table 4: The complete results on GSM-MC, MATH-MC, and PythonIO (continued in Table 5). The results for
GSM-MC are the mean value of the ten sets of different problems in Figure 7, with standard deviation given in
subscript.
-----
Model GSM-MC MATH-MC PythonIO Average
Pythia-70M 25.45 1.03 26.21 27.08 26.25
_±_
Pythia-160M 25.05 1.32 24.20 25.12 24.79
_±_
Pythia-410M 24.19 0.88 24.93 23.69 24.27
_±_
Pythia-1B 24.64 1.34 23.89 22.98 23.84
_±_
Pythia-1.4B 25.05 0.88 24.60 23.28 24.31
_±_
Pythia-2.8B 24.63 1.07 24.01 26.19 24.94
_±_
Pythia-6.9B 24.97 0.92 23.54 25.12 24.54
_±_
Pythia-12B 24.93 1.01 24.95 25.95 25.28
_±_
Flan-T5-60M 16.95 1.05 19.60 25.65 20.73
_±_
Flan-T5-220M 22.79 0.99 22.59 24.05 23.14
_±_
Flan-T5-770M 24.93 0.83 22.18 27.20 24.77
_±_
Flan-T5-3B 25.03 0.87 24.93 26.60 25.52
_±_
Flan-T5-11B 32.80 1.63 26.43 29.33 29.52
_±_
Flan-UL2 31.54 0.97 27.35 25.95 28.28
_±_
BLOOM-0.56B 24.73 0.86 23.97 25.42 24.71
_±_
BLOOM-1.1B 25.50 1.16 24.97 24.17 24.88
_±_
BLOOM-1.7B 25.79 0.98 25.23 23.16 24.73
_±_
BLOOM-3B 25.11 1.01 25.17 24.35 24.88
_±_
BLOOM-7B 25.04 1.19 25.03 24.23 24.77
_±_
BLOOMZ-0.56B 25.05 0.70 24.97 25.30 25.11
_±_
BLOOMZ-1.1B 24.76 1.14 23.22 24.64 24.21
_±_
BLOOMZ-1.7B 25.48 1.27 24.77 24.94 25.06
_±_
BLOOMZ-3B 25.71 1.31 24.18 24.29 24.73
_±_
BLOOMZ-7B 27.94 0.83 25.34 22.62 25.30
_±_
Table 5: The complete results on GSM-MC, MATH-MC, and PythonIO (continued from Table 4). The results for
GSM-MC are the mean value of the ten sets of different problems in Figure 7, with standard deviation given in
subscript.
-----
**B** **Prompt Details and Sample Outputs**
The following are grade school math word problems. Please answer the last problem following the
format of the previous examples. Highlight the final answer with ####.
Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many
clips did Natalia sell altogether in April and May?
Natalia sold 48/2 = <<48/2=24>>24 clips in May.
Natalia sold 48+24 = <<48+24=72>>72 clips altogether in April and May.
#### 72
{four more in−context examples}
Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her
friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck
egg. How much in dollars does she make every day at the farmers’ market?
Figure 8: Prompt format for evaluating LLMs on the original GSM8K dataset. In-context examples come from
training set.
-----
**Pythia 160M**
A $2.00 per day, a $1.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day,
a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a
$0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, a $0.00 a day, ...
**BLOOM 1.7B**
How much does she make per day at the farmers’ market?
Janet eats three eggs for breakfast every morning and bakes four muffins for her friends every day. She
sells the remaining at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does
she make every day at the farmers’ market? How much does she make per day at the farmers’ market?
Janet eats three eggs for breakfast every morning and bakes four muffins for her friends every day. She
sells the remaining at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does
she make every day at the farmers’ market? How much does she make per day at the farmers’ market?
...
**Flan−T5 770M**
Janet eats 3 eggs for breakfast and 4 for muffins every day for a total of 3 + 4 = 7 eggs. She sells the
remaining 16 eggs at the farmers’ market for $2 per egg for a total of 16 −7 = $8 per day. The final
answer: 8.
**Qwen1.5−Chat 14B**
Let’s program in Python to solve it.
# define the variables
eggs_per_day = 16
breakfast_eggs = 3
muffin_eggs = 4
price_per_egg = 2
# calculate the eggs left after breakfast and baking
eggs_left = eggs_per_day −breakfast_eggs −muffin_eggs
# calculate the daily earnings
daily_earnings = eggs_left * price_per_egg
# print the result
print(daily_earnings)
Figure 9: Typical invalid responses to the question in Figure 8: repetition at word-level (Pythia) and sentence-level
(BLOOM), not following the answer format (Flan-T5), and writing programs instead of solving the problem directly
(Qwen).
-----
The following are multiple choice questions (with answers) about grade school math.
Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many
clips did Natalia sell altogether in April and May?
A. 30040
B. 84
C. 72
D. 96
Answer: C
{four more in−context examples}
Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her
friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck
egg. How much in dollars does she make every day at the farmers’ market?
A. 22
B. 64
C. 18
D. 12
Answer:
Figure 10: Prompt format for evaluating LLMs on GSM8K-MC.
The following are multiple choice questions (with answers) about high school math.
A board game spinner is divided into three parts labeled $A$, $B$ and $C$. The probability of the
spinner landing on $A$ is $\frac{1}{3}$ and the probability of the spinner landing on $B$ is $\frac
{5}{12}$. What is the probability of the spinner landing on $C$? Express your answer as a common
fraction.
A. \frac{1}{12}
B. \dfrac{1−\frac{5}{12}}{12}
C. \frac{1}{4}
D. \frac{1}{1.67}
Answer: C
{four more in−context examples}
We roll a fair 6−sided die 5 times. What is the probability that we get a 6 in at most 2 of the rolls?
A. \dfrac{50}{1296}
B. \frac{1}{4}
C. \frac{625}{648}
D. 1
Answer:
Figure 11: Prompt for evaluating LLMs on MATH-MC.
-----
The following are multiple choice questions (with answers) about Python program reasoning.
Program:
R = 3
C = 3
def min_cost(cost, m, n):
tc = [[0 for x in range(C)] for x in range(R)]
tc[0][0] = cost[0][0]
for i in range(1, m+1):
tc[i][0] = tc[i−1][0] + cost[i][0]
for j in range(1, n+1):
tc[0][j] = tc[0][j−1] + cost[0][j]
for i in range(1, m+1):
for j in range(1, n+1):
tc[i][j] = min(tc[i−1][j−1], tc[i−1][j], tc[i][j−1]) + cost[i][j]
return tc[m][n]
Input:
min_cost([[1, 2, 3], [4, 8, 2], [1, 5, 3]], 2, 2)
Output:
A. 8
B. 10
C. 12
D. 6
Answer: A
{four more in−context examples}
Program:
def remove_Occ(s,ch):
for i in range(len(s)):
if (s[i] == ch):
s = s[0 : i] + s[i + 1:]
break
for i in range(len(s) −1,−1,−1):
if (s[i] == ch):
s = s[0 : i] + s[i + 1:]
break
return s
Input:
remove_Occ("hello","l")
Output:
A. "hell"
B. "heo"
C. "helo"
D. "hello"
Answer:
Figure 12: Prompt for evaluating LLMs on PythonIO.
-----
| [
"Ziyin, Zhang",
"Zhaokun, Jiang",
"Rui, Wang",
"Lizhen, Xu",
"Hongkun, Hao"
] | 2024-06-26T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2405.11966 | https://arxiv.org/abs/2405.11966 | https://www.semanticscholar.org/paper/9b73f37d89fa1484609909c335a50203fa094a0a |
Program-Aided Reasoners (Better) Know What They Know | Prior work shows that program-aided reasoning, in which large language models (LLMs) are combined with programs written in programming languages such as Python, can significantly improve accuracy on various reasoning tasks. However, while accuracy is essential, it is also important for such reasoners to “know what they know”, which can be quantified through the calibration of the model. In this paper, we compare the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types - LLaMA models and OpenAI models. Our results indicate that PAL leads to improved calibration in 75% of the instances. Our analysis uncovers that prompting styles that produce lesser diversity in generations also have more calibrated results, and thus we also experiment with inducing lower generation diversity using temperature scaling and find that for certain temperatures, PAL is not only more accurate but is also more calibrated than COT. Overall, we demonstrate that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts. | This paper compares the calibration of Program Aided Language Models (PAL) and text-based Chain-of-thought (COT) prompting techniques over 5 datasets and 2 model types - LLaMA models and OpenAI models and demonstrates that, in the majority of cases, program-aided reasoners better know what they know than text-based counterparts. | # Program-Aided Reasoners (Better) Know What They Know
**Anonymous ACL submission**
**001** Prior work shows that program-aided reason- **Chain of Thought (CoT)** **Program Aided LM (PaL)**
**002** ing, in which large language models (LLMs) Since the electricity bill is $100 each
**003** are combined with programs written in pro- month, we multiply this amount by 12
**004** gramming languages such as Python, can sig- per year:
**005** nificantly improve accuracy on various reason- $100/month * 12 months/year = $1200
**006** ing tasks. However, while accuracy is essen- of roommates to find how much each
**007** tial, it is also important for such reasoners to roommate will pay:$1200/4 roommates = $1200
**008** “know what they know”, which can be quan- **The answer is $1200** **Interpreter**
**009** tified through the calibration of the model.
**010** In this paper, we compare the calibration of
**011** Program Aided Language Models (PAL) and
Q : 4 people share a room. Each month the electricity bill is $100.
_How much will each roommate pay per year for electricity, if they_
_divide the share equally?_
**Chain of Thought (CoT)**
**Program Aided LM (PaL)**
Since the electricity bill is $100 each
month, we multiply this amount by 12
(months in a year) to find the total cost
per year:
$100/month * 12 months/year = $1200
We divide the total cost by the number
of roommates to find how much each
roommate will pay:
$1200/4 roommates = $1200
**The answer is $1200**
**Interpreter**
**1200** **300**
**012** text-based Chain-of-thought (COT) prompting
Figure 1: Comparisons of COT and PAL outputs. COT
**013** techniques over 5 datasets and 2 model types
can sometimes generate the correct reasoning chain but
**014** - LLaMA models and OpenAI models. Our
fail to derive the correct answer as a final step. PAL
**015** results indicate that PAL leads to improved cal
fixes this issue by executing generated code to arrive at
**016** ibration in 75% of the instances. Our analysis
a deterministic answer.
**017** uncovers that prompting styles that produce
**018** lesser diversity in generations also have more
**019** calibrated results, and thus we also experiment
**020** with inducing lower generation diversity using language; program-aided language models (PAL); **040**
**021** temperature scaling and find that for certain Gao et al. (2022) have demonstrated the efficacy of **041**
**022** temperatures, PAL is not only more accurate using code (such as Python programs) as a means **042**
**023** but is also more calibrated than COT. Overall,
of improving the model’s reasoning, surpassing the **043**
**024** we demonstrate that, in the majority of cases,
accuracy of conventional chain-of-thought style **044**
**025** program-aided reasoners better know what they
prompts in some tasks (Madaan et al., 2022; Lyu **045**
**026** know than text-based counterparts.[1]
et al., 2023; Zhang et al., 2023a,b). Both methods **046**
**027** **1** **Introduction** are illustrated in Figure 1. **047**
Currently, most works proposing such methods **048**
**028** As language models (LMs) grow in size and ca- have been primarily focused on improving accu- **049**
**029** pabilities, several works examine methods to im- racy. However, for real-world applications, another **050**
**030** proving their reasoning skills with different styles highly desirable feature of ML systems is that they **051**
**031** of prompting (Wei et al., 2022; Wang et al., 2022; should be able to provide reliable confidence esti- **052**
**032** Suzgun et al., 2022b; Zhou et al., 2022; Yao et al., _mates. Accurate estimates of model confidence_ **053**
**033** 2023). One representative method, chain of thought are helpful for many applications, including al- **054**
**034** (COT) reasoning (Wei et al., 2022), takes inspira- lowing the model to refrain from providing an **055**
**035** tion from how humans approach problem-solving answer when uncertain, asking for human inter- **056**
**036** – by breaking down the problem into a sequence vention in uncertain cases, or providing confidence **057**
**037** of natural language explanations before arriving estimates to a downstream model that consumes **058**
**038** at a final answer. Furthermore, prompts that en- the outputs. The reliability is measured through **059**
**039** able problem-solving are not limited to natural _calibration, how a model’s confidence in its predic-_ **060**
[1Anonymized code and data are available at https://](https://anonymous.4open.science/r/code-calibrates-A61D/) tions aligns accurately with actual outcomes (Guo **061**
[anonymous.4open.science/r/code-calibrates-A61D/.](https://anonymous.4open.science/r/code-calibrates-A61D/) et al., 2017a; Jiang et al., 2020; Zhao et al., 2021). **062**
2262
-----
|Interpreter|Col2|
|---|---|
A1 5/10 0.5
GSM8K **CoT** A2 2/10 0.2
Arithmetic Reasoning Breaking down problem into a
sequence of natural language
steps before arriving at answer. An 1/10 0.1
Date Understanding
Symbolic Reasoning A1 0.2
2/10
## PaL
Breaking down problem into
Object Counting Python code before executing A2 4/10 0.4
it to arrive at the final Interpreter
Algorithmic Reasoning answer. An 0.2
2/10
Figure 2: Illustration of eliciting model confidence through self-consistency
**063** In sum, the previous research has shown, as elo
**064** quently stated by Kadavath et al. (2022) “language
**065** models (mostly) know what they know” — LLMs
**066** are reasonably well calibrated, although some im
**067** perfections remain.
**068** In this work, we examine the effect of program
**069** aided reasoning on calibration. We consider five
**070** datasets that cover different reasoning tasks and
**071** evaluate the performance of both PAL and COT
**072** style prompting for OpenAI models (OpenAI,
**073** 2023) and LLaMA models (Touvron et al., 2023)
**074** with respect to accuracy and calibration. We pri
**075** marily explore three main research questions :
the similarity of the generated chains-of-thoughts **098**
or programs and calibration, which might help ex- **099**
[plain these trends. Code and data available here](https://anonymous.4open.science/r/code-calibrates-A61D) **100**
under the Apache 2.0 license. **101**
**2** **Preliminaries and Mathematical** **102**
**Formulation** **103**
**2.1** **Measuring Calibration** **104**
Calibration refers to the alignment between the pre- **105**
dicted probability estimates of a model and their **106**
actual correctness or accuracy (Guo et al., 2017b). **107**
Formally, a perfectly calibrated model can be ex- **108**
pressed using the following equation, where X is **109**
the given input, Y is the true output, the model’s **110**
output is _Y[ˆ] and PN_ ( Y[ˆ] _X) = p is the probability,_ **111**
_|_
or “confidence”, over the model’s output. **112**
**076** - RQ 1: Does program-aided reasoning result
**077** _in significantly different calibration than text-_
**078** _based COT?_
**079** - RQ 2: Are the observed trends different across
**080** _OpenAI models and LLaMA models?_
**081** - RQ 3: Does the consistency of LLM genera
**082** _tions affect calibration? We examine this by_
**083** _measuring generation diversity and answer_
**084** _space entropy._
**085** Our results show that program-aided reasoners
**086** know what they know even better than standard
**087** text-based reasoners with COT. In particular, on
**088** OpenAI models, PAL exhibits not only superior
**089** accuracy but also a consistent enhancement in cal
**090** ibration of about 50%, over COT. Interestingly,
**091** the consistent improvement of calibration is not
**092** observed in LLaMA models. Still, we find that
**093** adjusting the temperature of sampling (similar to
**094** a widely used method of Platt scaling (Platt et al.,
**095** 1999), PAL improves with respect to accuracy and
**096** calibration. We also conduct a detailed analysis of
**097** these observations and find a correlation between
_Yˆ = Y_ _PN_ ( Y[ˆ] _X) = p_ = p, _p_ [0, 1]
_|_ _|_ _∀_ _∈_
(1)
**113**
In essence, Equation 1 conveys that if a perfectly **114**
calibrated model makes 100 predictions, and the **115**
confidence of each prediction is 0.6, then we ex- **116**
pect the accuracy to be also 0.6. Nevertheless, the **117**
model may exhibit varying confidence levels for **118**
each sample. Therefore, it is imperative to calcu- **119**
late calibration across all confidence scores. We **120**
estimate this probability by dividing the predictions **121**
into M separate and equally sized interval buckets **122**
based on their confidence levels. **123**
We use the expected calibration error (ECE), **124**
a common measure of (lack of) calibration, a **125**
weighted average of the discrepancy between each **126**
bucket’s accuracy and confidence. It is given in **127**
Equation 2 **128**
Here Bm is the m-th bucket that contains sam- **129**
ples whose probabilities of predictions fall in the **130**
2263
-----
**131** interval _mM−1_ _[,][ m]M_, where _[|][B]n[m][|]_ is Bm’s size rel
**132** ative to all the samples. acc (Bm) is the average
**133** accuracy of the samples in the m-th bucket, and
**134** conf (Bm) is the corresponding average confidence
**135** of the samples falling in the m-th bucket.
_M_
_Bm_
**136** _|_ _|_ acc (Bm) conf (Bm) (2)
_n_ _|_ _−_ _|_
_m=1_
X
**2.3** **Similarity and Answer Entropy** **176**
In addition to empirically evaluating the impact on **177**
accuracy and calibration, we conduct a qualitative **178**
analysis of the reasoning chains (the latent vari- **179**
able Z described previously). Here, we observe a **180**
consistent pattern, i.e. the correct answers corre- **181**
sponding to a question are often associated with **182**
similar generations. **183**
We find that this observation aligns with the find- **184**
ing made by Li et al. (2022a), that there are nu- **185**
merous ways in which solutions can be incorrect. **186**
In contrast, correct solutions tend to exhibit more **187**
uniform behaviour. **188**
To empirically validate this observation, we em- **189**
ployed sentence embeddings generated from the **190**
_all-MiniLM-v6 model to compute the average sim-_ **191**
ilarity among the generations/reasoning chains, **192**
equivalent to calculating similarity over latent vari- **193**
ables Z. **194**
Furthermore, to gain deeper insights into the re- **195**
lationship between similarity in generations and **196**
corresponding answers, we compute the entropy **197**
_H(A) of the answer space where P_ (ai) refers to **198**
the probability of the i[th] answer in K answers ob- **199**
tained by extraction or program execution for a **200**
given sample. **201**
_K_
_H(A) = −_ _P_ (ai) · log2 P (ai) (4) **202**
_i=1_
X
This allowed us to investigate whether the ob- **203**
served similarity in the latent variable space Z **204**
leads to a lower entropy within the answer space. **205**
**3** **Experimental Design** **206**
**3.1** **Models** **207**
We compare the calibration and accuracy of two **208**
different prompting strategies - CoT and PaL on **209**
an equal number of closed-source and open-source **210**
models. The open source models used in exper- **211**
imentation are LLaMA2-13B, LLaMA2-70B (Tou- **212**
vron et al., 2023). and the closed-source models **213**
are gpt-3.5-turbo, text-davinci-003 (Brown **214**
et al., 2020). It should be noted that all models **215**
have received some form of supervision from code **216**
during pre-training (OpenAI, 2023; Touvron et al., **217**
2023), in addition to being primarily trained on text. **218**
For LLaMA models, we leveraged vLLM (Kwon **219**
et al., 2023) for distributed inference using A6000 **220**
GPU(s). **221**
**137** Consider a setup where we have buckets with a
**138** width of 0.1. All instances where a model assigns
**139** probabilities between 0.4 and 0.5 will be allocated
**140** to the bucket B5 or the bucket encompassing prob
**141** abilities between 0.4 and 0.5. Subsequently, the
**142** average accuracy for the instances in these buck
**143** ets and the average probability/confidence is com
**144** puted. The absolute difference of the accuracy and
**145** confidence is multiplied by the proportion of total
**146** instances in a bucket. This process is repeated for
**147** every bucket, and the individual scores are summed
**148** up to calculate ECE.
**149** **2.2** **Self-consistency as a measure of**
**150** **confidence**
**151** Self-consistency (Wang et al., 2022) is a natural
**152** language reasoning technique that uses chain-of
**153** thought prompting to generate multiple paths for
**154** reasoning. This process aims to select the most
**155** consistent answer by sampling and marginalizing.
**156** Here, we use a latent variable Z to represent the
**157** reasoning chain/programs. Y is the answer that is
**158** either extracted in case of COT or obtained after
**159** execution in case of PAL. We marginalize over Z
**160** by taking a majority vote over answers. Thus, we
**161** rely on majority voting over the answers to obtain
**162** confidence estimates for each sample.
**163** _K is a hyperparameter that controls the num-_
**164** ber of generations (referenced in equation 3). The
**165** higher the value of K, the better our approximation
**166** of the probability of each sample. Figure 2 shows
**167** an overview of this process.
_K_
**168** _P_ ( Y[ˆ]0 _Z0) = [1]_ I _Yˆi = Y[ˆ]0_ (3)
_|_ _K_
_i=0_
X n o
**169** Wang et al. (2022) and Xiong et al. (2023a) sug
**170** gest that self-consistency can be an effective way
**171** to elicit confidence from models. Hence, given the
**172** lack of per-token log probabilities in closed LMs
**173** like gpt-3.5-turbo and text-davinci-003, we
**174** adopt self-consistency as a proxy measure for cali
**175** bration.
2264
-----
Dataset Category # Samples Example
GSM8K (Cobbe et al., 2021) Arithmetic 1319 Q: A robe takes 2 bolts of blue fiber and half that much white fiber.
How many bolts in total does it take? A: 3
GSM8K Hard (Gao et al., 2022) Arithmetic 1319 Q: A robe takes 2287720 bolts of blue fiber and half that much white fiber.
How many bolts in total does it take? A: 3431580
Date Understanding (Suzgun et al., 2022a) Symbolic 360 Q: Yesterday was April 30, 2021.
What is the date today in MM/DD/YYYY? A: 05/01/2021
Object Counting (Suzgun et al., 2022a) Algorithmic 250 Q: I have three couches, a lamp, a stove, a table, a fridge,
and a microwave. How many objects do I have? A: 8
Repeat Copy (Suzgun et al., 2022a) Algorithmic 32 Q: say python twice and data once, and then repeat all of this three times.
A: python python data python python data python python data
Table 1: Datasets with their examples and categories.
**222** **3.2** **Hyperparameters**
**223** For our experiments, we set temperature (T) as 1.0
**224** and the probability (p) for nucleus sampling (Holtz
**225** man et al., 2020) as 1.0. Selecting a temperature of
**226** 1.0 enables direct sampling from the model as no
**227** scaling of probabilities is involved, as seen from
**228** Equation 5. Here, zi refers to the logit for the ith
**229** token generated, and N is the vocabulary size.
_zi_
_e_ _T_
**230** _σ (zi) =_ _zj_ (5)
_N_
_j=0_ _[e]_ _T_
**231** For use K = 10 generations per sample for all
P
**232** datasets. We set the maximum number of tokens
**233** (input + output) for each generation to 1024.
**234** **3.3** **Tasks**
**235** We examined reasoning tasks encompassing sev
**236** eral challenges, including arithmetic, algorithmic,
**237** and symbolic reasoning. We use five datasets
**238** that cover these different kinds of reasoning tasks.
**239** The arithmetic reasoning datasets include GSM8K
**240** (Cobbe et al., 2021) and GSM8K Hard (Gao et al.,
**241** 2022). The algorithmic reasoning tasks include
**242** _Object-Counting (Suzgun et al., 2022a) and Repeat-_
**243** _Copy (Suzgun et al., 2022a)._ We used Date
**244** _Understanding as a Symbolic Reasoning Dataset_
**245** (Suzgun et al., 2022a). Specific information about
**246** the datasets used can be found in Table 1.
**247** **3.4** **Prompt Design**
**248** We provide all models with natural language
**249** chain-of-thought (CoT) prompts and code-based
**250** Program-Aided Language Model (PaL) prompts.
**251** For datasets where CoT prompts are available in
**252** their original form, we use them as presented in
**253** the original paper (Wei et al., 2022). We modify
**254** these prompts for other datasets to suit the specific
**255** task while maintaining their original format. For
**256** PaL prompts, we use and adapt the code prompts
**257** provided in (Gao et al., 2022). The prompts are
included in Appendix A. **258**
**4** **Results** **259**
We investigate two model types: OpenAI models **260**
and LLaMA models along with the two different **261**
prompting strategies - PAL and COT. **262**
**4.1** **Effect of prompting style on Calibration** **263**
In this section, we look at the first two RQs: **264**
**_RQ 1: Does one prompting style result in signifi-_** **265**
_cantly better calibration than the other?_ **266**
**_RQ 2: Are the observed calibration trends different_** **267**
_across OpenAI models and LLaMA models?_ **268**
Table 2 shows results for OpenAI models, we **269**
observe that PAL prompting improves both cali- **270**
bration and accuracy across all datasets. We see **271**
approximately 50% relative reduction in calibra- **272**
tion error and an average improvement of 18.42% **273**
in accuracy. **274**
In Figure 3, we show reliability plots which il- **275**
lustrate improved calibration, with the reliability **276**
curves for PAL prompting consistently aligning **277**
closer to the ideal reliability curve as compared **278**
to COT across datasets. While PAL shows a no- **279**
table gain of 14.83% in accuracy across all datasets **280**
for LLaMA models, it shows better calibration in **281**
only half of our settings. Overall, for both OpenAI **282**
models and LLaMA models, we observe that PAL **283**
leads to better calibration than COT for 75% of the **284**
settings. The reliability plots for all datasets for **285**
the models gpt-3.5-turbo and LLaMA2-70B can **286**
be seen in Appendix Section E, D. **287**
**Effect of PAL on calibration controlling for ac-** **288**
**curacy** One reasonable hypothesis is that PAL is **289**
improving calibration because it achieves higher **290**
accuracy, and more accurate models can be better **291**
calibrated. To examine this hypothesis, we con- **292**
duct statistical analysis using mixed linear models **293**
(McLean et al., 1991), which allows us to consider **294**
2265
-----
|Name|Score Model|GSM8K Object-Counting Repeat-Copy Date-Understanding GSM8K Hard|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
|||CoT PaL|CoT PaL|CoT PaL|CoT PaL|CoT PaL|
|LLaMA2-70B LLaMA2-13B text-davinci-003 gpt-3.5-turbo|ECE (↓) LLaMA ACC (↑) LLaMA SIM (↑) LLaMA ENT (↓) LLaMA|0.19 0.07 59.28 63.91 72.20 92.40 2.24 1.92|0.17 0.14 76.00 92.40 94.43 94.72 1.00 0.76|0.18 0.23 40.62 71.88 87.10 90.58 1.93 2.00|0.09 0.18 66.66 70.18 86.87 82.15 1.44 1.54|0.07 0.03 21.45 40.62 92.28 74.32 2.85 2.17|
||ECE (↓) LLaMA ACC (↑) LLaMA SIM (↑) LLaMA ENT (↓) LLaMA|0.06 0.08 27.0 34.34 76.6 93.3 2.83 2.49|0.08 0.06 56.4 81.6 93.2 95.3 1.52 0.85|0.11 0.17 34.37 53.12 89.8 88.6 2.43 2.47|0.06 0.05 48.24 50.41 79.5 84.2 2.23 2.06|0.12 0.14 6.67 25.55 74.0 92.32 2.42 3.06|
||ECE (↓) OpenAI ACC (↑) OpenAI SIM (↑) OpenAI ENT (↓) OpenAI|0.04 0.03 65.65 76.49 90.5 97.8 1.27 0.79|0.29 0.02 59.21 98.00 99.1 99.8 0.36 0.02|0.20 0.06 67.23 93.75 96.2 98.2 1.38 0.44|0.19 0.11 60.70 72.35 92.4 97.4 0.71 0.64|0.15 0.07 23.95 71.27 89.8 97.9 2.31 0.81|
||ECE (↓) OpenAI ACC (↑) OpenAI SIM (↑) OpenAI ENT (↓) OpenAI|0.05 0.03 84.00 82.40 94.40 97.80 0.57 0.49|0.38 0.03 82.40 97.20 99.10 98.60 0.59 0.048|0.18 0.16 56.25 68.75 97.70 97.90 1.15 0.35|0.17 0.13 61.51 77.23 95.3 97.6 0.50 0.36|0.13 0.05 55.21 62.91 90.60 95.40 1.65 2.43|
Table 2: Comparison of Expected Calibration Error (ECE (↓) ), Accuracy (ACC (↑) ), Cosine Similarity (SIM (↑) )
and Answer Entropy (ENT (↓) ) across datasets. The darker blue shade highlights better performing prompting
technique.
**295** the significance of varying the prompting strategy
**296** while controlling for accuracy as a confounding
**297** factor.
**298** Upon analyzing the results in Table 3, we ob
**299** serve that, when treating the prompting style as
**300** a fixed effect, PAL exhibits a negative coefficient
**301** of -0.103 (p=0.0) for OpenAI models, which is
**302** statistically significant with a threshold of p=0.05.
**303** This implies that PAL contributes to the reduction
**304** in ECE and has a positive impact on calibration.
**305** On the contrary, for LLaMA models, we did not
**306** find that PAL had a statistically significant effect on
**307** ECE after controlling for accuracy. Across LLaMA
**308** models and OpenAI models, PAL has a statistically
**309** significant (p=0.02) correlation of -0.067 with ECE,
**310** indicating that PAL helps increase calibration on
**311** the whole even when controlling for accuracy.
**312** To summarize, we see that PAL prompting has
**313** better calibration than COT prompting (–RQ1) .
**314** While PAL has improved calibration in all settings
**315** for OpenAI models, this trend is less consistent for
**316** LLaMA models (–RQ2).
|Model Type|LLaMA models OpenAI models Both|
|---|---|
|Fixed Effect (ECE vs Prompting Style)|PAL : -0.010 PAL : -0.103 PAL : -0.067|
|p-value|0.961 0.000 0.002|
Table 3: Statistical analysis using mixed linear models,
keeping ECE vs Prompting Style as a fixed effect and
accuracy as a random effect.
**317** **4.2** **Effect of generation diversity on**
**318** **calibration**
**319** In this section, we look at the third research ques
**320** tion: RQ 3: Does the consistency of LLM genera
**321** _tions affect calibration?_
**322** Qualitative analysis of the generations reveals
that PAL generations adhere to a consistent struc- **323**
ture that divides the problem-solving process into **324**
three distinct parts. This is depicted in Figure 4. In **325**
the first part, the model initializes the variables and **326**
sets up their initial values required for the calcu- **327**
lation. This part is straightforward due to syntac- **328**
tic constraints and remains largely similar across **329**
generations. In the second part, the model gener- **330**
ates the required logic by applying formulas and **331**
utilizing various operations to derive the desired **332**
result. Finally, in the third part, the model gener- **333**
ates the answer by assigning the calculated result **334**
to a variable and returning it, which, again, doesn’t **335**
vary much across generations. Hence, the diversity **336**
of the generation is mainly limited to the second **337**
part, making code more constrained in its genera- **338**
tion space compared to text. Therefore, there is a **339**
**standardized structure in the code generated by** **340**
language models with PaL prompts. **341**
**Lower generation diversity and answer entropy** **342**
**observed in prompting strategy with better cali-** **343**
**bration** To quantitatively analyze if code-based **344**
generations have lower generation diversity and **345**
lead to a narrower answer space, we computed **346**
aggregated cosine similarity scores for all the gen- **347**
erations and entropy over the answer space. For **348**
OpenAI models, we note that the cosine similarity **349**
scores with PAL are higher than the correspond- **350**
ing scores for COT. This observation suggests that **351**
code-based generations display a higher degree of **352**
similarity from a semantic perspective. Moreover, **353**
the answer entropy for PAL is lower than COT. **354**
This implies that similar generations that cluster to- **355**
gether in the semantic space (Li et al., 2022a) also **356**
converge to the equivalent solution space. This **357**
2266
-----
Figure 3: Reliability Plots for various structured reasoning tasks for the model gpt-3.5-turbo. The x-axis
represents confidence, and the y-axis represents accuracy.
**def solution ()** :
# Part 1: Initialize
num_glasses = 16
first_glass_price = 5
second_glass_discount = 0.6
# Part 2: Calculate
second_glass_price = first_glass_price *
second_glass_discount
pair_price = first_glass_price +
second_glass_price
num_pairs = num_glasses // 2
total_cost = num_pairs * pair_price
# Part 3: Result Generation
result = total_cost
**return result**
Figure 4: Typical output structure with PaL
**358** leads to lower uncertainty in the probability dis
**359** tribution of the answer space and, hence, lower
**360** entropy. From Table 2, we thus can see that PAL
**361** helps produce similar generations that converge to
**362** the same answer space, which is also consistently
**363** correct. Hence, it achieves better performance and
**364** provides more confidence in its predictions.
**365** For LLaMA models, we don’t see this trend of
**366** PAL having higher generation similarity and lower
**367** answer entropy for all datasets. However, for al
**368** most all settings for LLaMA models and OpenAI
**369** models, the prompting strategy that produces more
**370** similar generations and lower answer entropy is
also more calibrated. **371**
To summarize, it is evident that lower generation **372**
diversity and lower answer entropy are correlated **373**
with higher calibration. (–RQ3) **374**
**Better calibration observed for PAL when induc-** **375**
**ing similarity in generations for LLaMA models** **376**
We observe that for OpenAI models, PAL is not **377**
only more accurate but also more calibrated than **378**
COT. Consequently, we explore whether the re- **379**
duction in generation diversity, achievable through **380**
lower temperatures, can contribute to improved cal- **381**
ibration for LLaMA models. **382**
We perform a parameter sweep across tempera- **383**
ture values between 0.1 and 0.7 with a step size of **384**
0.2. We show the variation of accuracy, calibration, **385**
generation similarity, and answer entropy for two **386**
datasets in Figure 5. The plots for the remaining **387**
datasets are available in Appendix B, Figure 6. We **388**
can see that we obtain better calibration for both **389**
the LLaMA models in both PAL and COT for tem- **390**
peratures below 1.0. From Tables 4 and 5, we note **391**
that in the majority of runs with T < 1.0, PAL is **392**
better calibrated than COT. Considering accuracy **393**
and calibration, optimal performance is achieved **394**
at different temperatures for each dataset. For most **395**
T values, the similarity scores are higher while cor- **396**
responding answer entropy values are lower for **397**
PAL compared to COT. This mirrors the pattern **398**
2267
-----
|Temp|GSM8K Object-Counting Repeat-Copy Date-Understanding GSM8K Hard|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
||CoT PaL|CoT PaL|CoT PaL|CoT PaL|CoT PaL|
|ECE ACC 0.7 SIM ENT|0.101 0.07 66.03 67.9 85.07 97.47 1.60 1.48|0.076 0.03 77.6 93.2 98.53 99.42 0.55 0.21|0.14 0.12 53.1 75.0 93.78 94.81 1.46 1.35|0.12 0.09 74.5 76.42 89.62 96.16 0.88 0.80|0.18 0.03 27.14 52.91 83.28 97.29 2.43 1.72|
|ECE ACC 0.5 SIM ENT|0.049 0.036 66.94 67.24 88.69 98.25 1.35 1.19|0.103 0.059 77.23 92.4 99.17 99.85 0.39 0.12|0.112 0.075 59.3 68.75 97.09 96.81 1.09 0.99|0.114 0.063 73.44 77.2 92.49 97.97 0.60 0.52|0.139 0.104 27.7 51.63 87.65 98.2 2.18 1.39|
|ECE ACC 0.3 SIM ENT|0.057 0.097 64.89 63.38 91.91 98.75 1.087 0.960|0.140 0.064 78.8 91.2 99.51 99.94 0.238 0.056|0.194 0.113 53.12 71.87 97.73 98.27 0.780 0.504|0.153 0.139 72.62 76.42 95.18 99.02 0.420 0.317|0.230 0.206 26.16 49.28 91.14 98.75 1.866 1.076|
|ECE ACC 0.1 SIM ENT|0.219 0.257 58.6 58.37 95.79 99.37 0.661 0.526|0.188 0.07 77.2 90.4 99.82 99.98 0.085 0.026|0.278 0.156 53.12 68.75 99.28 99.64 0.288 0.173|0.233 0.176 69.91 78.32 98.21 99.68 0.195 0.137|0.418 0.380 23.5 45.87 95.31 99.35 1.179 0.540|
Table 4: Results of temperature scaling for LLaMA2-70B. The darker blue shade highlights better performing
prompting technique.
**399** observed for OpenAI models. For LLaMA-2 13b,
**400** PAL displays better calibration than COT at lower
**401** temperatures. However, the optimal temperature
**402** for obtaining the best performance for calibration
**403** and accuracy is still T=1.0.
**404** For LLaMA-2 70b, optimal temperature values in
**405** our runs for calibration are either 0.5 or 0.7, while
**406** extreme values (0.1, 1.0) yield lower calibration
**407** and accuracy performance. We can, therefore see
**408** that scaling temperatures in the LLaMA models
**409** can help us to obtain better calibration for PAL,
**410** specifically for the LLaMA-2 70b, which already
**411** performs better than COT on these reasoning tasks.
**412** Thus, we do see that lower generation diversity and
**413** lower answer entropy lead to higher calibration up
**414** to a certain point, after which it negatively affects
**415** the calibration. (–RQ3)
**416** **5** **Related Work**
**417** **5.1** **Prompting Strategies for Reasoning**
**418** Recent developments in language models have in
**419** troduced various methods to enhance their reason
**420** ing abilities. One such method is CoT (Wei et al.,
**421** 2022), which helps models generate a series of
**422** intermediate steps to solve problems. CoT has
**423** demonstrated improved performance in arithmetic,
**424** common sense, and symbolic reasoning tasks.
**425** There are approaches such as PaL (Gao et al., 2022)
**426** and Program-of-thoughts (PoT) (Chen et al., 2022),
**427** which go a step further by generating programs
**428** as intermediate steps and using an interpreter to
**429** process them. Code as a medium of reasoning has
**430** shown considerable promise, evidenced by better
**431** performance over chain-of-thought style prompting
strategies in several recent studies (Madaan et al., **432**
2022; Gao et al., 2022; Lyu et al., 2023; Zhang **433**
et al., 2023a,b). Unlike these works, our primary **434**
goal in this paper is to understand the effect of code **435**
prompts on calibration. **436**
**5.2** **Calibration in Language Models** **437**
Calibration has been extensively studied in struc- **438**
tured prediction problems, such as named entity **439**
recognition and part of speech tagging (Jagannatha **440**
and Yu, 2020), as well as in natural language un- **441**
derstanding tasks, like question answering and text **442**
classification (Kamath et al., 2020; Kong et al., **443**
2020; Desai and Durrett, 2020). More recently, **444**
studies have focused on calibrating language mod- **445**
els when used as generators (Jiang et al., 2021; **446**
Zhao et al., 2021). Additionally, the study by Ka- **447**
davath et al. (2022) explored the likelihood of a **448**
model knowing the answer before proposing a re- **449**
sponse. However, these approaches typically rely **450**
on access to the model’s logits. **451**
In contrast, the work by (Tian et al., 2023) inves- **452**
tigates verbalized probability estimates to assess **453**
the calibration of large language models without **454**
needing access to logits. This involves querying **455**
the model about its confidence in the answers it **456**
generates. Furthermore, (Xiong et al., 2023b) in- **457**
troduced self-consistency-based methods for cali- **458**
bration, demonstrating their superior performance **459**
compared to verbalized methods. In our research, **460**
we adopt self-consistency as the method of choice **461**
for measuring calibration. **462**
2268
-----
Figure 5: Trends seen in temperature scaling for the model LLaMA2-70B. Across datasets, the accuracy and
calibration improve upon lowering the temperature to a certain extent. This is in line with having lower generation
similarity and lower answer entropy. The optimal temperatures seen are 0.5 and 0.7 across datasets. For other
datasets, refer Appendix, Figure 6.
**463** **5.3** **Utilizing Language for Code Generation**
**464** The exploration of using natural language for code
**465** generation has taken diverse approaches in research.
**466** Initial efforts involved rule-based, predictive and
**467** deep-learning variations (Gulwani and Marron,
**468** 2014; Woods, 1973; Zelle and Mooney, 1996; Lin
**469** et al., 2017; Rabinovich et al., 2017). However,
**470** performance enhancements were observed using
**471** pre-trained models trained on code-based datasets
**472** (Chen et al., 2021; Nijkamp et al., 2022; Gao et al.,
**473** 2022; Li et al., 2023). Employing pre-trained lan
**474** guage models (LMs) for code generation as a way
**475** to solve tasks that require step-by-step structuring
**476** and various forms of reasoning has proven to be
**477** particularly effective (Ni et al., 2023a; Gao et al.,
**478** 2022; Ni et al., 2023b).
**479** Intermediate execution results from code have
**480** been used for training (Chen et al., 2018) and in
**481** ference (Wang et al., 2018). Majority-based voting
**482** on the results of code executions (which is the self
**483** consistency-based methodology we employ) has
**484** also been shown to be an effective technique for se
**485** lecting the right candidate (Li et al., 2022b; Cobbe
**486** et al., 2021; Shi et al., 2022).
**487** **6** **Conclusion**
**488** In this study, we explore the impact of two distinct
**489** prompting styles, namely PAL and COT, on the
**490** calibration of OpenAI models and LLaMA mod
**491** els. Our investigation spans 5 reasoning datasets,
**492** employing self-consistency as the methodology for
**493** eliciting calibration. We analyze four different met
**494** rics - calibration (ECE), accuracy (ACC), average
similarity in generations (SIM), and answer en- **495**
tropy (ENT) . Our findings are as follows: **496**
- **_RQ 1: Does one prompting style result in_** **497**
_significantly better calibration than the other?_ **498**
Empirical results show that PAL generally has **499**
higher calibration and accuracy for 82.5% of **500**
the cases across OpenAI and LLaMA models **501**
for a varied range of temperatures. **502**
- RQ 2: Are the observed calibration trends **503**
_different across OpenAI models and LLaMA_ **504**
_models? We observed that OpenAI models are_ **505**
in general better calibrated for the reasoning **506**
tasks with up to 19% improvement in ECE. **507**
- RQ 3: Does the consistency of LLM genera- **508**
_tions affect performance?_ PAL prompting **509**
shows a general trend of having greater simi- **510**
larity in the generation over COT, which we **511**
hypothesize could be due to the inherent struc- **512**
ture present in the code. We see that greater **513**
generation similarity is accompanied by lower **514**
answer entropy and lower ECE. **515**
We hope that this study will catalyze additional **516**
research aimed at holistically evaluating and gain- **517**
ing deeper insights into the role of prompts in vari- **518**
ous tasks and domains. **519**
**7** **Limitations** **520**
Access to OpenAI models is only available through **521**
an API which limits the ability to exactly con- **522**
trol the hyperparameters influencing the genera- **523**
2269
-----
**524** tions. Moreover, OpenAI models are not transpar
**525** ent, which limits the ability to study these models.
**526** Because of this lack of transparency it is also hard
**527** to draw conclusive insights about any comparisons
**528** between OpenAI models and LLaMA models In
**529** our study, we report the results from a single run
**530** but due to combination of utilizing temperature
**531** value of 1.0 and hardware induced stochasticity, it
**532** is possible to get varying results for a given model.
**533** **References**
**534** Tom Brown, Benjamin Mann, Nick Ryder, Melanie
**535** Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
**536** Neelakantan, Pranav Shyam, Girish Sastry, Amanda
**537** Askell, et al. 2020. Language models are few-shot
**538** learners. Advances in neural information processing
**539** _systems, 33:1877–1901._
**540** Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
**541** Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka
**542** plan, Harri Edwards, Yuri Burda, Nicholas Joseph,
**543** Greg Brockman, et al. 2021. Evaluating large
**544** language models trained on code. arXiv preprint
**545** _arXiv:2107.03374._
**546** Wenhu Chen, Xueguang Ma, Xinyi Wang, and
**547** William W. Cohen. 2022. Program of thoughts
**548** prompting: Disentangling computation from rea
**549** soning for numerical reasoning tasks. _ArXiv,_
**550** abs/2211.12588.
**551** Xinyun Chen, Chang Liu, and Dawn Xiaodong Song.
**552** [2018. Execution-guided neural program synthesis.](https://api.semanticscholar.org/CorpusID:53317540)
**553** In International Conference on Learning Representa
**554** _tions._
**555** Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
**556** Jacob Hilton, Reiichiro Nakano, Christopher Hesse,
**557** and John Schulman. 2021. Training verifiers to solve
**558** math word problems. ArXiv, abs/2110.14168.
**559** Shrey Desai and Greg Durrett. 2020. Calibra
**560** tion of pre-trained transformers. _arXiv preprint_
**561** _arXiv:2003.07892._
**562** Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
**563** Pengfei Liu, Yiming Yang, Jamie Callan, and Gra
**564** ham Neubig. 2022. Pal: Program-aided language
**565** models. ArXiv, abs/2211.10435.
**566** Sumit Gulwani and Mark Marron. 2014. Nlyze: Inter
**567** active programming by natural language for spread
**568** sheet data analysis and manipulation. In Proceedings
**569** _of the 2014 ACM SIGMOD international conference_
**570** _on Management of data, pages 803–814._
**571** Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Wein
**572** berger. 2017a. On calibration of modern neural net
**573** works. In International conference on machine learn
**574** _ing, pages 1321–1330. PMLR._
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Wein- **575**
berger. 2017b. On calibration of modern neural **576**
networks. In International Conference on Machine **577**
_Learning._ **578**
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and **579**
[Yejin Choi. 2020. The curious case of neural text](http://arxiv.org/abs/1904.09751) **580**
[degeneration.](http://arxiv.org/abs/1904.09751) **581**
Abhyuday Jagannatha and Hong Yu. 2020. Calibrat- **582**
ing structured output predictors for natural language **583**
processing. In Proceedings of the conference. As- **584**
_sociation for Computational Linguistics. Meeting,_ **585**
volume 2020, page 2078. NIH Public Access. **586**
Zhengbao Jiang, J. Araki, Haibo Ding, and Graham **587**
Neubig. 2020. How can we know when language **588**
models know? on the calibration of language models **589**
for question answering. Transactions of the Associa- **590**
_tion for Computational Linguistics, 9:962–977._ **591**
Zhengbao Jiang, Jun Araki, Haibo Ding, and Graham **592**
Neubig. 2021. How can we know when language **593**
models know? on the calibration of language models **594**
for question answering. Transactions of the Associa- **595**
_tion for Computational Linguistics, 9:962–977._ **596**
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom **597**
Henighan, Dawn Drain, Ethan Perez, Nicholas **598**
Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli **599**
Tran-Johnson, et al. 2022. Language models **600**
(mostly) know what they know. _arXiv preprint_ **601**
_arXiv:2207.05221._ **602**
Amita Kamath, Robin Jia, and Percy Liang. 2020. Se- **603**
lective question answering under domain shift. arXiv **604**
_preprint arXiv:2006.09462._ **605**
Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie **606**
Lyu, Tuo Zhao, and Chao Zhang. 2020. Cali- **607**
brated language model fine-tuning for in-and out-of- **608**
distribution data. arXiv preprint arXiv:2010.11506. **609**
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying **610**
Sheng, Lianmin Zheng, Cody Hao Yu, Joseph E. Gon- **611**
[zalez, Haotong Zhang, and Ion Stoica. 2023. Effi-](https://api.semanticscholar.org/CorpusID:261697361) **612**
[cient memory management for large language model](https://api.semanticscholar.org/CorpusID:261697361) **613**
[serving with pagedattention. Proceedings of the 29th](https://api.semanticscholar.org/CorpusID:261697361) **614**
_Symposium on Operating Systems Principles._ **615**
Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas **616**
Muennighoff, Denis Kocetkov, Chenghao Mou, Marc **617**
Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. **618**
2023. Starcoder: may the source be with you! arXiv **619**
_preprint arXiv:2305.06161._ **620**
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, **621**
Julian Schrittwieser, Rémi Leblond, Tom Eccles, **622**
James Keeling, Felix Gimeno, Agustin Dal Lago, **623**
et al. 2022a. Competition-level code generation with **624**
alphacode. Science, 378(6624):1092–1097. **625**
Yujia Li, David H. Choi, Junyoung Chung, Nate Kush- **626**
man, Julian Schrittwieser, Rémi Leblond, Tom, Ec- **627**
cles, James Keeling, Felix Gimeno, Agustin Dal **628**
Lago, Thomas Hubert, Peter Choy, Cyprien de, **629**
2270
-----
**630** Masson d’Autume, Igor Babuschkin, Xinyun Chen,
**631** Po-Sen Huang, Johannes Welbl, Sven Gowal,
**632** Alexey, Cherepanov, James Molloy, Daniel Jaymin
**633** Mankowitz, Esme Sutherland Robson, Pushmeet
**634** Kohli, Nando de, Freitas, Koray Kavukcuoglu, and
**635** [Oriol Vinyals. 2022b. Competition-level code gener-](https://api.semanticscholar.org/CorpusID:246527904)
**636** [ation with alphacode. Science, 378:1092 – 1097.](https://api.semanticscholar.org/CorpusID:246527904)
**637** Xi Victoria Lin, Chenglong Wang, Deric Pang, Kevin
**638** Vu, and Michael D Ernst. 2017. Program synthe
**639** sis from natural language using recurrent neural net
**640** works. University of Washington Department of Com
**641** _puter Science and Engineering, Seattle, WA, USA,_
**642** _Tech. Rep. UW-CSE-17-03-01._
**643** Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang,
**644** Delip Rao, Eric Wong, Marianna Apidianaki, and
**645** Chris Callison-Burch. 2023. Faithful chain-of
**646** thought reasoning. arXiv preprint arXiv:2301.13379.
**647** Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang,
**648** and Graham Neubig. 2022. Language models of
**649** code are few-shot commonsense learners. ArXiv,
**650** abs/2210.07128.
**651** Robert A McLean, William L Sanders, and Walter W
**652** Stroup. 1991. A unified approach to mixed linear
**653** models. The American Statistician, 45(1):54–64.
**654** Ansong Ni, Srini Iyer, Dragomir Radev, Veselin Stoy
**655** anov, Wen-tau Yih, Sida Wang, and Xi Victoria Lin.
**656** 2023a. Lever: Learning to verify language-to-code
**657** generation with execution. In International Con
**658** _ference on Machine Learning, pages 26106–26128._
**659** PMLR.
**660** Ansong Ni, Pengcheng Yin, Yilun Zhao, Martin Rid
**661** dell, Troy Feng, Rui Shen, Stephen Yin, Ye Liu,
**662** Semih Yavuz, Caiming Xiong, Shafiq R. Joty, Yingbo
**663** Zhou, Dragomir R. Radev, and Arman Cohan.
**664** [2023b. L2ceval: Evaluating language-to-code gener-](https://api.semanticscholar.org/CorpusID:263310373)
**665** [ation capabilities of large language models. ArXiv,](https://api.semanticscholar.org/CorpusID:263310373)
**666** abs/2309.17446.
**667** Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan
**668** Wang, Yingbo Zhou, Silvio Savarese, and Caiming
**669** Xiong. 2022. A conversational paradigm for program
**670** synthesis.
**671** OpenAI. 2023. Openai documentation.
**672** [https://platform.openai.com/docs/](https://platform.openai.com/docs/model-index-for-researchers)
**673** [model-index-for-researchers.](https://platform.openai.com/docs/model-index-for-researchers)
**674** John Platt et al. 1999. Probabilistic outputs for support
**675** vector machines and comparisons to regularized like
**676** lihood methods. Advances in large margin classifiers,
**677** 10(3):61–74.
**678** Maxim Rabinovich, Mitchell Stern, and Dan Klein.
**679** 2017. Abstract syntax networks for code gen
**680** eration and semantic parsing. _arXiv preprint_
**681** _arXiv:1704.07535._
**682** Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke
**683** [Zettlemoyer, and Sida I. Wang. 2022. Natural lan-](https://api.semanticscholar.org/CorpusID:248377325)
**684** [guage to code translation with execution.](https://api.semanticscholar.org/CorpusID:248377325) _ArXiv,_
**685** abs/2204.11454.
Mirac Suzgun, Nathan Scales, Nathanael Schärli, Se- **686**
bastian Gehrmann, Yi Tay, Hyung Won Chung, **687**
Aakanksha Chowdhery, Quoc V Le, Ed H Chi, Denny **688**
Zhou, et al. 2022a. Challenging big-bench tasks **689**
and whether chain-of-thought can solve them. arXiv **690**
_preprint arXiv:2210.09261._ **691**
Mirac Suzgun, Nathan Scales, Nathanael Scharli, Se- **692**
bastian Gehrmann, Yi Tay, Hyung Won Chung, **693**
Aakanksha Chowdhery, Quoc V. Le, Ed Huai hsin **694**
Chi, Denny Zhou, and Jason Wei. 2022b. Challeng- **695**
ing big-bench tasks and whether chain-of-thought **696**
can solve them. ArXiv, abs/2210.09261. **697**
Katherine Tian, Eric Mitchell, Allan Zhou, Archit **698**
Sharma, Rafael Rafailov, Huaxiu Yao, Chelsea Finn, **699**
and Christopher D Manning. 2023. Just ask for cali- **700**
bration: Strategies for eliciting calibrated confidence **701**
scores from language models fine-tuned with human **702**
feedback. arXiv preprint arXiv:2305.14975. **703**
Hugo Touvron, Louis Martin, Kevin Stone, Peter Al- **704**
bert, Amjad Almahairi, Yasmine Babaei, Nikolay **705**
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti **706**
Bhosale, et al. 2023. Llama 2: Open founda- **707**
tion and fine-tuned chat models. _arXiv preprint_ **708**
_arXiv:2307.09288._ **709**
Chenglong Wang, Kedar Tatwawadi, Marc **710**
Brockschmidt, Po-Sen Huang, Yi Mao, Olek- **711**
[sandr Polozov, and Rishabh Singh. 2018. Robust](https://api.semanticscholar.org/CorpusID:52184274) **712**
text-to-sql generation with execution-guided **713**
[decoding. arXiv: Computation and Language.](https://api.semanticscholar.org/CorpusID:52184274) **714**
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, **715**
Ed Huai hsin Chi, and Denny Zhou. 2022. Self- **716**
consistency improves chain of thought reasoning in **717**
language models. ArXiv, abs/2203.11171. **718**
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten **719**
Bosma, Ed Huai hsin Chi, F. Xia, Quoc Le, and **720**
Denny Zhou. 2022. Chain of thought prompting **721**
elicits reasoning in large language models. ArXiv, **722**
abs/2201.11903. **723**
William A Woods. 1973. Progress in natural language **724**
understanding: an application to lunar geology. In **725**
_Proceedings of the June 4-8, 1973, national computer_ **726**
_conference and exposition, pages 441–450._ **727**
Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie **728**
Fu, Junxian He, and Bryan Hooi. 2023a. Can **729**
llms express their uncertainty? an empirical eval- **730**
uation of confidence elicitation in llms. _ArXiv,_ **731**
abs/2306.13063. **732**
Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie **733**
Fu, Junxian He, and Bryan Hooi. 2023b. Can llms **734**
express their uncertainty? an empirical evaluation **735**
of confidence elicitation in llms. _arXiv preprint_ **736**
_arXiv:2306.13063._ **737**
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, **738**
Thomas L. Griffiths, Yuan Cao, and Karthik **739**
Narasimhan. 2023. Tree of thoughts: Deliberate **740**
problem solving with large language models. ArXiv, **741**
abs/2305.10601. **742**
2271
-----
**743** John M Zelle and Raymond J Mooney. 1996. Learning
**744** to parse database queries using inductive logic pro
**745** gramming. In Proceedings of the national conference
**746** _on artificial intelligence, pages 1050–1055._
**747** Li Zhang, Liam Dugan, Hai Xu, and Chris Callison
**748** Burch. 2023a. Exploring the curious case of code
**749** prompts. ArXiv, abs/2304.13250.
**750** Li Zhang, Hai Xu, Yue Yang, Shuyan Zhou, Weiqiu
**751** You, Manni Arora, and Chris Callison-Burch. 2023b.
**752** Causal reasoning of entities and events in procedural
**753** texts. In Findings.
**754** Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and
**755** Sameer Singh. 2021. Calibrate before use: Improv
**756** ing few-shot performance of language models. In In
**757** _ternational Conference on Machine Learning, pages_
**758** 12697–12706. PMLR.
**759** Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei,
**760** Nathan Scales, Xuezhi Wang, Dale Schuurmans,
**761** Olivier Bousquet, Quoc Le, and Ed Huai hsin
**762** Chi. 2022. Least-to-most prompting enables com
**763** plex reasoning in large language models. _ArXiv,_
**764** abs/2205.10625.
2272
-----
**765** **A** **Prompts**
**766** The following sections display one example of
**767** the few-shot prompts used for each dataset across
**768** prompting styles.
**769** **A.1** **PAL Prompts**
**770** **A.1.1** **GSM8K/GSM8K-Hard**
def solution () :
"""Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"""
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
money_left = money_initial - money_spent
result = money_left
return result
**771**
**772** **A.1.2** **Object Counting**
# Q: I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a cabbage, two
_,→_ onions, and three fridges. How many vegetables do I have?
```
```
```
def solution () :
# note: I'm not counting the chair, tables, or fridges
vegetables_to_count = {{'potato': 2,'cauliflower': 1,'lettuce head': 1,'cabbage':
_,→_ 1,'onion': 2}}
return sum (vegetables_to_count.values () )
```
```
```
**773**
**774** **A.1.3** **Date Understanding**
# Q: 2015 is coming in 36 hours. What is the date one week from today in MM/DD/YYYY?
# If 2015 is coming in 36 hours, then today is 36 hours before.
today = datetime (2015, 1, 1) - relativedelta (hours=36)
# One week from today,
one_week_from_today = today + relativedelta (weeks=1)
# The answer formatted with %m/%d/%Y is
one_week_from_today.strftime ('%m/%d/%Y')
**775**
**776** **A.1.4** **Repeat Copy**
# Q: Repeat the word duck four times, but halfway through also say quack
```
```
```
def solution () :
result = []
for i in range (1, 5) :
result.append ("duck")
if i == 2:
result.append ("quack")
return " ".join (result)
```
```
```
**777**
2273
-----
**778** **A.2** **COT Prompts**
**779** **A.2.1** **GSM8K/GSM8K-Hard**
**Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be**
21 trees. How many trees did the grove workers plant today?
**A: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 -**
15 = 6. The answer is 6.
**780**
**781** **A.2.2** **Object Counting**
**Q: I have a chair, two potatoes, a cauliflower, a lettuce head, two tables, a cabbage, two onions, and three fridges. How**
many vegetables do I have?
**A: Chair, tables and fridges are not vegetables, so we are not counting them. Two potatoes, cauliflower, lettuce head,**
cabbage and two onions are vegetables, so they will be counted The total number of vegetables are 2 + 1 + 1 + 1 + 2 The
answer is: 7
**782**
**783** **A.2.3** **Date Understanding**
**Q: 2015 is coming in 36 hours. What is the date one week from today in MM/DD/YYYY?**
**A: If 2015 is coming in 36 hours, then it is coming in 2 days. 2 days before 01/01/2015 is 12/30/2014, so today is**
12/30/2014. So one week from today will be 01/05/2015. So the answer is 01/05/2015.
**784**
**785** **A.2.4** **Repeat Copy**
**Q: Repeat the word duck four times, but halfway through also say quack**
**A: On repeating the word duck four times we get: duck duck duck duck Halfway through if we say quack, we have to**
say quack in between the 2nd word and the 3rd word The answer is: duck duck quack duck duck
**786**
2274
-----
**787** **B** **Temperature Scaling Experiments -**
**788** **Line Plots**
Figure 6: Trends seen in temperature scaling for the datasets - GSM8K-Hard, Date-Understanding and Repeat-Copy
2275
-----
**789** **C** **Results of temperature scaling for**
**790** **LLaMA2-13B**
|Temp|GSM8K Object-Counting Repeat-Copy Date-Understanding GSM8K Hard|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
||CoT PaL|CoT PaL|CoT PaL|CoT PaL|CoT PaL|
|0.7 ECE ACC SIM ENT|0.052 0.046 36.16 39.27 85.40 97.48 2.405 2.222|0.12 0.108 61.60 80.80 98.65 99.33 0.970 0.303|0.175 0.181 34.37 59.37 95.07 94.02 2.048 1.949|0.108 0.107 50.60 56.63 86.81 96.03 1.505 1.330|0.125 0.087 10.91 30.32 83.48 97.37 2.892 2.332|
|0.5 ECE ACC SIM ENT|0.099 0.093 36.08 37.07 88.70 98.10 2.144 1.976|0.171 0.135 60.80 82.00 99.21 99.83 0.722 0.158|0.131 0.106 34.37 59.30 97.12 97.74 1.455 1.311|0.134 0.183 53.92 55.00 89.94 97.63 1.141 0.942|0.177 0.1381 10.80 29.87 87.37 98.03 2.694 2.074|
|0.3 ECE ACC SIM ENT|0.160 0.108 33.81 30.72 91.51 98.54 1.826 1.618|0.231 0.154 62.40 81.20 99.56 99.93 0.475 0.080|0.200 0.206 37.50 65.63 97.62 96.81 0.967 1.19|0.219 0.304 55.28 50.40 93.02 98.41 79.04 63.78|0.246 0.231 10.16 27.97 90.48 98.44 2.389 1.716|
|0.1 ECE ACC SIM ENT|0.372 0.334 30.25 32.14 95.12 99.19 1.145 0.84|0.311 0.174 62.80 81.20 99.80 99.98 0.204 0.014|0.4969 0.1812 37.50 46.80 97.24 98.68 0.286 0.283|0.341 0.423 54.47 49.32 97.00 99.32 0.373 0.261|0.372 0.334 30.25 32.14 95.12 99.19 1.1458 0.840|
Table 5: Results of temperature scaling for LLaMA2-13B. The darker blue shade highlights better performing
prompting technique.
2276
-----
**791** **D** **Reliability Plots for LLaMA2-70B**
(a) GSM8k CoT (b) GSM8k PaL
(c) Date Understanding CoT (d) Date Understanding PaL
(e) Object Counting CoT (f) Object Counting PaL
(g) Repeat Copy CoT (h) Repeat Copy PaL
(i) GSM8k Hard CoT (j) GSM8k Hard PaL
Figure 7: Reliability plots for all the datasets using COT and PAL prompting for the model LLaMA2-70B
2277
-----
**792** **E** **Reliability Plots for gpt-3.5-turbo**
(a) GSM8k CoT (b) GSM8k PaL
(d) Date Understanding PaL
(c) Date Understanding CoT
(e) Object Counting CoT (f) Object Counting PaL
(g) Repeat Copy CoT (h) Repeat Copy PaL
(i) GSM8k Hard CoT (j) GSM8k Hard PaL
Figure 8: Reliability plots for all the datasets using COT and PAL prompting for the model gpt-3.5-turbo
2278
-----
| [
"Aman, Madaan",
"Anubha, Kabra",
"Sanketh, Rangreji",
"Yash, Mathur",
"Graham, Neubig",
"Kevin, Duh",
"Emmy, Liu",
"Helena, Gomez",
"Steven, Bethard"
] | 2024-06-01T00:00:00 | NAACL 2024 Long Papers | true | 1 | 0 | null | https://aclanthology.org/2024.naacl-long.125 | https://arxiv.org/abs/2311.09553 | https://www.semanticscholar.org/paper/541fca99845e8c04acd8550d2bdd1cbea55f3d92 |
Property Preserving Embedding of First-order Logic | N/A | null | null | [
"Cezary, Kaliszyk",
"Julian, Parsert",
"Stephanie, Autherith"
] | 2020-04-27T00:00:00 | null | false | 1 | 0 | null | https://easychair.org/publications/paper/Cwgq | null | https://www.semanticscholar.org/paper/ce81a2e8adf8f4d08ccb2f0885c71f6189f3c595 |
Proving Olympiad Algebraic Inequalities without Human Demonstrations | Solving Olympiad-level mathematical problems represents a significant advancement in machine intelligence and automated reasoning. Current machine learning methods, however, struggle to solve Olympiad-level problems beyond Euclidean plane geometry due to a lack of large-scale, high-quality datasets. The challenge is even greater in algebraic systems, which involves infinite reasoning spaces within finite conditions. To address these issues, we propose \textit{AIPS}, an \textit{Algebraic Inequality Proving System} capable of autonomously generating complex inequality theorems and effectively solving Olympiad-level inequality problems without requiring human demonstrations. During proof search in a mixed reasoning manner, a value curriculum learning strategy on generated datasets is implemented to improve proving performance, demonstrating strong mathematical intuitions. On a test set of 20 International Mathematical Olympiad-level inequality problems, AIPS successfully solved 10, outperforming state-of-the-art methods. Furthermore, AIPS automatically generated a vast array of non-trivial theorems without human intervention, some of which have been evaluated by professional contestants and deemed to reach the level of the International Mathematical Olympiad. Notably, one theorem was selected as a competition problem in a major city 2024 Mathematical Olympiad.All the materials are available at {\it \href{https://sites.google.com/view/aips}{sites.google.com/view/aips}}. | This work proposes AIPS, an Algebraic Inequality Proving System capable of autonomously generating complex inequality theorems and effectively solving Olympiad-level inequality problems without requiring human demonstrations. | ## Proving Olympiad Algebraic Inequalities without Human Demonstrations
**Chenrui Wei[1]** **Mengzhou Sun[2]** **Wei Wang[1]**
```
[email protected] [email protected] [email protected]
```
1State Key Laboratory of General Artificial Intelligence, BIGAI, Beijing, China
2Department of Mathematics, National University of Singapore
**Abstract**
Solving Olympiad-level mathematical problems represents a significant advancement in machine intelligence and automated reasoning. Current machine learning
methods, however, struggle to solve Olympiad-level problems beyond Euclidean
plane geometry due to a lack of large-scale, high-quality datasets. The challenge is
even greater in algebraic systems, which involve infinite reasoning spaces within
finite conditions. To address these issues, we propose AIPS, an Algebraic Inequality
_Proving System capable of autonomously generating complex inequality theorems_
and effectively solving Olympiad-level inequality problems without requiring human demonstrations. During proof search in a mixed reasoning manner, a value
curriculum learning strategy on generated datasets is implemented to improve proving performance, demonstrating strong mathematical intuitions. On a test set of 20
International Mathematical Olympiad-level inequality problems, AIPS successfully
solved 10, outperforming state-of-the-art methods. Furthermore, AIPS automatically generated a vast array of non-trivial theorems without human intervention,
some of which have been evaluated by professional contestants and deemed to
reach the level of the International Mathematical Olympiad. Notably, one theorem was selected as a competition problem in a major city 2024 Mathematical
[Olympiad. All the materials are available at sites.google.com/view/aips2.](https://sites.google.com/view/aips2)
**1** **Introduction**
One of the key milestones in the field of artificial intelligence is the capability to reason (Pearl 1998)
and prove theorems (Wu 1978; Chou et al. 2000; Trinh et al. 2024). However, theorem proving often
involves long reasoning chains, complex mathematical structures, intricate calculations, and infinite
reasoning spaces. Consequently, developing AI capable of proving complex mathematical theorems
requires sophisticated reasoning and the ability to navigate through an extensive search space to
construct a valid proof. The complexity of these problems lies in the need for effective heuristics and
strategies to manage the vast number of possible actions and the lengthy sequences of logical steps
necessary to arrive at a solution.
Existing work on grade school and college admission math problems has achieved notable success,
e.g., GSM8K (Cobbe et al. 2021) and SAT Math (Achiam et al. 2023), which demonstrate better
performance on tasks such as arithmetic and basic algebra. However, research focusd on solving
International Mathematical Olympiad (IMO)-level problems remains relatively sparse. Notable
efforts in this area include AlphaGeometry (Trinh et al. 2024), and GPT-f (Polu and Sutskever 2020)
on miniF2F (Zheng et al. 2021), which have made progress in solving Euclidean plane geometry at
the Olympiad level and various mathematical competition problems, respectively.
A significant challenge for learning-based methods in this domain is the scarcity of suitable datasets,
which limits the ability to train models effectively and hampers progress in achieving human-level
Preprint. Under review.
-----
performance on these high-difficulty problems. The miniF2F dataset (Zheng et al. 2021) includes
only 244 validation and 244 test mathematical problems from various competitions. AlphaGeometry
(Trinh et al. 2024) addresses this issue by synthesizing millions of theorems and proofs across
different levels of complexity to train a neural language model from scratch. Similarly, the INequality
Theorem proving benchmark, INT (Wu et al. 2020), can synthesize a theoretically unlimited number
of theorems and proofs in the domain of algebraic equalities and inequalities. However, INT focuses
on testing a learning-assisted theorem proving agent’s generalization ability rather than increasing
the difficulty to competition level.
Another significant challenge in automated theorem proving is designing effective search strategies to
navigate the vast space of possible proofs. Recent advancements have highlighted various approaches
to enhance search efficiency and proof success rates. Some studies have shown that incorporating
Monte Carlo Tree Search (MCTS) at test time can significantly aid in proving new theorems (Wu
et al. 2020). Inspired by the success of AlphaZero (Zhang and Yu 2020), other research has explored
HyperTree Proof Search (HTPS) (Lample et al.), which learns from previous proof searches through
online training, iteratively improving its strategy by learning which paths are more likely to lead to
successful proofs. Another innovative approach starts the proof search from the root goal that needs
to be proved (Polu and Sutskever 2020), expanding a maintained proof tree by prioritizing open goals
based on their cumulative log probability.
In this work, we introduce AIPS, an Algebraic Inequality Proving System, which can generate a
large number of high-quality theorems and solve IMO-level algebraic problems. AIPS focuses
on ternary and quaternary inequalities, excluding n-variable inequalities represented recursively in
formal verification systems. Among the generated theorems, some have proven to be very challenging,
with one selected for a major city’s 2024 Mathematical Olympiad. We present novel and challenging
inequality theorems discovered by AIPS in the supplementary material, which have been carefully
evaluated by IMO-level professional contestants and found to be comparable to IMO inequalities
from around the year 2000.
Additionally, AIPS incorporates a value network to evaluate newly generated inequalities, selecting
subgoal candidates based on the top scores provided by the value network. The value network is
trained on synthetic datasets with increasing difficulty in a curriculum manner. In our experiments,
AIPS proved difficult theorems up to the IMO level and solve 10 out of 20 problems in an IMO-level
inequality test, significantly surpassing the performance of previous Large Language Model-based
theorem provers (Polu and Sutskever 2020; Polu et al. 2022; Yang et al. 2024; Song et al. 2024).
The main contributions in this paper are summarized as follows:
1. We propose a symbolic deductive engine capable of efficiently generating high-quality and
solving high-difficulty algebraic inequality theorems. This engine addresses the bottleneck
of lacking large-scale, high-quality data in this field.
2. We demonstrate that a symbolic algebraic inequality prover can be significantly enhanced
under the guidance of a value network, especially when the value network is trained in a
curriculum manner.
3. Our AIPS can generate challenging and elegant inequality theorems, with one theorem
selected for a major city’s Mathematical Olympiad. AIPS can prove 10 out of 20 IMO-level
inequalities, outperforming state-of-the-art methods.
**2** **Related Work**
**Automatic Theorem Proving. Automatic theorem proving has been a focus of artificial intelligence**
since the 1950s (Harrison et al. 2014; Wu 1978). Modern theorem provers, based on tactic and
premise selection, search for proofs by interacting with proof assistants such as Lean (De Moura et al.
2015), Coq (Barras et al. 1999) and Isabelle (Nipkow et al. 2002). They struggle with the rapidly
expanding search space and the scarcity of high-quality datasets in most mathematical domains. The
challenge is even greater for proving algebraic inequalities, which involve complex computational
rules. Previous efforts to address this issue have focused on augmenting tactic selection and premise
prediction in interactive theorem provers (Polu and Sutskever 2020; Polu et al. 2022; Yang et al. 2024).
However, these provers have only been able to solve simple problems in this field. In this paper, our
-----
AIPS can solve highly complex algebraic inequality theorems up to the level of the International
Mathematical Olympiad(IMO).
**Datasets and Benchmarks for Theorem Proving. Formal mathematical libraries, such as Isarstep**
(Li et al. 2020), Mathlib (van Doorn et al. 2020), and CoqGym (Yang and Deng 2019), currently
serve as the primary datasets for theorem proving. These libraries, manually curated by humans,
include many intricate and profound proofs, such as the formal proofs of the Four-Color Theorem
(Gonthier et al. 2008), the Liquid Tensor Experiment (Scholze 2022), and Fermat’s Last Theorem
(Buzzard and Taylor 2024). Due to the labor-intensive nature of manual proof writing, these libraries
are relatively small, typically containing around 200,000 theorems. While they encompass a wide
range of mathematical fields, the number of theorems in specific areas is quite limited.
Synthetic theorems can provide large-scale datasets for learning-based theorem provers (Polu and
Sutskever 2020; Wu et al. 2020). However, these theorems are often of limited difficulty. Recently,
significant progress has been made in synthesizing geometry theorems (Trinh et al. 2024) using
neural theorem provers. In this paper, we develop AIPS for algebraic inequalities, which can
automatically and efficiently generate a large number of intricate theorems, with some reaching the
level of the International Mathematical Olympiad(IMO). These theorems will significantly improve
neural theorem proving methods.
**Search Strategy for Efficient Inference. Deep learning has achieved remarkable success in en-**
hancing search algorithms (Silver et al. 2016, 2017). Proof search in theorem proving, however, is
more challenging compared to self-play games like Go, as it may involve an infinite search space
within finite conditions. INT (Wu et al. 2020) incorporates Monte Carlo Tree Search (MCTS), while
HyperTree Proof Search (HTPS) (Lample et al.) employs online training to improve search strategy.
GPT-f (Polu and Sutskever 2020) learns a value network to guide backward search. Our AIPS
integrates the benefits of both HTPS and GPT-f, introducing a value curriculum learning strategy.
**3** **Algebraic Inequality Proving System**
**3.1** **Symbolic Deductive Engine for Algebra**
Interactive theorem provers, such as Lean, can verify mathematical operations but lack the ability
to perform automatic mathematical reasoning by combining computational rules. This challenge is
amplified in the automatic proof of algebraic inequalities, which often involves numerous calculations,
extensive transformation rules, and complex theorem matching. To address this, we design a symbolic
deductive engine for algebra, encompassing dozens of fundamental theorems and transformation
rules for algebraic inequalities. It integrates with the symbolic computation system SymPy [1], enabling
effective algebraic reasoning.
**3.1.1** **Representation for Algebraic Expressions and Theorems**
Algebraic expressions are represented symbolically with an underlying expression tree structure
as shown in Fig. 1. The basic computational rules include self-equivalence transformations of
inequalities and various built-in SymPy functions, such as combining fractions (sympy.together)
and expanding expressions (sympy.expand). Our deductive engine’s library also includes fundamental algebraic inequality theorems: the Arithmetic Mean-Geometric Mean Inequality (AM-GM),
the weighted AM-GM Inequality, Cauchy’s Inequality, Jensen’s Inequality, the discrete Hölder’s
Inequality, Schur’s Inequality, the binary and ternary Muirhead’s Theorem. Each inequality is represented as a category of theorem matching, containing variables, conditions, conclusions, and equality
conditions.
**3.1.2** **Pattern Matching for Inequality Theorems**
During symbolic reasoning, the system attempts to apply inequality theorems to a particular algebraic
expression or inequality, as shown in Fig. 1. When matching algebraic expressions with inequality
theorems, it first traverses the expression tree to determine how the value of the entire expression
changes as the node’s value increases, updating the node’s label accordingly. If the change cannot
be determined, no theorem matching is performed on the subtree of that node. After completing
1https://www.sympy.org/
-----
Figure 1: Examples of expression trees and pattern matching for the AM-GM inequality are illustrated.
In (a), for x, y, z ≥ 0, the value of _x+xyzy+z_ [decreases as][ x][ +][ y][ +][ z][ increases, so the label of the]
with respect to the root, e.g.,node x + y + z is −1. By applying the AM-GM inequality, we derive a series of upper bounds[(][xyz]3[)][2][/][3] and 2[√]xxyz(y+z) [. In (b), when traversing the expression tree of]
1 1 1
_a+b_ [+] _b+c_ [+] _c+a_ [, pattern matching for the AM-GM inequality at various nodes yields different types]
1 1 1 3
of bounds, such as the upper bound 2√ab [+] 2√bc [+] 2[√]ca [and the lower bound] ((a+b)(b+c)(c+a)) 13 [.]
the labeling, the system matches the next layer of determinable nodes with theorems. If a match
is successful, the matched sub-expression is replaced with the new expression obtained using the
theorem. Based on the previous labels, it then determines whether the entire expression increases
or decreases, thereby deriving a new inequality. For certain inequality theorems, such as Jensen’s
Inequality, pattern matching is particularly complex and time-consuming. Therefore, to improve the
efficiency of reasoning at each step, we have imposed time limits on the matching process for some
theorems.
**3.1.3** **Forward Reasoning**
Forward reasoning in theorem proving involves matching variables and conditions to a theorem and
deducing new conclusions. In our engine, new inequalities can be obtained by matching theorems to
both sides of an inequality or by applying self-equivalence transformation rules. If any two of the
resulting inequalities can be connected (e.g., applying a ≤ _b and b ≤_ _c to derive a ≤_ _c), the system_
continues to link them to form new inequalities. Therefore, our engine has the capability to perform
forward reasoning to generate large-scale data.
**3.2** **Olympiad-Level Inequalities Proof Set**
One of the main challenges in enabling learning-based models to solve complex mathematical
problems is the scarcity of large-scale, high-quality datasets. To overcome this obstacle, we develop
a theorem generator that effectively generates Olympiad-level inequality theorems by enhancing the
methods described in Section 3.1.3.
**3.2.1** **Synthetic Theorem Generation**
We randomly generate thousands of cyclically symmetric symbolic expressions, which serve as the
initial premises for our reasoning process. Utilizing 32 CPUs, we run Algorithm 1 for 8 hours,
resulting in the generation of 191,643 inequality theorems. The generated inequalities are stored in a
tree structure, with each node containing the necessary information for extracting proofs and training
machine learning models. Fig. 2 shows the procedure of generating a synthetic theorem in our AIPS,
and Fig. 3(a) shows the distribution of inference depths in the generated inequalities.
-----
Figure 2: An example of generating synthetic theorems in AIPS. When the initial premise _√a[2]_ + 2bc+
_√b[2]_ + 2ca + _√c[2]_ + 2ab successfully matches with Jensen’s inequality, a new inequality is generated.
By subsequently applying transformation rules and matching other fundamental inequalities, such
as the AM-GM inequality, the deductive engine incrementally generates new inequality theorems.
When an inequality theorem is applied, the system verifies whether the equality condition holds.
(a) Distribution of inference depths. In the
process of generating synthetic theorems, we
limit the reasoning steps. Unlike geometry
problem, long reasoning chains in inequality generation can lead to trivial theorems.
Solutions to challenging IMO inequalities
typically involve only two or three steps of
matching inequality theorems.
(b) Self-evolving process of AIPS. After pre-training on the initial synthetic dataset, AIPS is capable of proving some challenging theorems. Guided by the value network, it then attempts
to solve problems in an increasingly difficult filtered dataset.
By extracting nodes on the proof path as positive labels and
other nodes as negative labels, it fine-tunes the value network
and gradually improves proving performance in a curriculum
manner.
Figure 3: (a) Distribution of inference depths in our dataset. (b) Self-evolving process of AIPS.
**3.2.2** **Synthetic Theorem Evaluation**
To evaluate the quality of our dataset, we select 10 problems with reasoning lengths exceeding five
steps, and invite two National Mathematical Olympiad gold medalists and one silver medalist to
assess the difficulty and elegance of these problems. Their evaluations reveal that our dataset contains
a vast array of non-trivial theorems, some of which surpass the difficulty of inequalities found in early
IMO competitions. Notably, one inequality theorem from our dataset is selected for a major city’s
Mathematical Olympiad. All the 10 problems and evaluation results are provided in Appendix C.
-----
**Algorithm 1 Generating Theorems**
1: function Generate_Theorems(expression P, loops N )
2: Initialize Theorem Set S,
_Inequality Transformation Rules O, Inequality Sets A1, A2, A3_
3: Apply S to P to obtain a series of inequalities and add those whose equality conditions hold
to R
4: **for i ←** 1 to N do
5: **for each inequality ineq in R do**
6: Apply rules O to ineq to obtain A1
7: **end for**
8: **for each inequality ineq in R do**
9: Apply theorems S to one side of ineq and check if it can be linked to the original
inequality. If so, add it to A2
10: **end for**
11: **for each inequality ineq in A2 do**
12: Check if ineq meets the equality condition and add it to A3 if it does
13: **end for**
14: Update R by selecting M inequalities from the union of A3 and A1 according to the length
of inequalities
15: **end for**
16: **return R**
17: end function
Figure 4: Overview of how AIPS proves a simple theorem. At each step, the deductive engine
attempts to match inequality theorems with each side of the goal and applies all transformation rules
to the expression, resulting in a list of new subgoals. The searched goal is placed into a closed list,
ensuring that it will not be examined again. If one of the new subgoals is true, indicating that the
inequality holds, then the theorem is proved. Otherwise, the new subgoals are added to the open list,
along with other subgoals generated previously. A value network then evaluates all subgoals in the
open list, and the top-value one is chosen for the next iteration of proof search.
**3.3** **Neural Algebraic Inequality Prover**
By leveraging the capabilities of the deductive engine introduced in Section 3.1 and the Best-Firstsearch algorithm (Dechter and Pearl 1985), we develop an algebraic inequality prover. This prover
formulates the algebraic inequality proving as a sequential decision-making process by selecting
theorems to generate highly human-readable proofs. As shown in Fig. 4, given a goal and related
-----
conditions, AIPS first generates a list of subgoals by applying a set of theorems at each iteration. A
value neural network is then used to evaluate these newly generated subgoals along with the previous
unresolved subgoals. The top-value subgoal is selected for the next step of reasoning. This iterative
process continues until the proof is successfully completed, as shown in Fig. 3(b).
**3.3.1** **Searching Proofs by Combining Value Network with Symbolic Prover**
The procedure of searching for inequality proofs is generally divided into three parts: mixed reasoning
for subgoal generation, evaluation, and planning.
**Subgoal Generation. There are two methods for generating subgoals in AIPS. The first method**
involves applying fundamental inequality theorems. Let X be the set of variables. Suppose the
inequality theorem to prove is u(X) ≤ _v(X) under a condition set P. AIPS first homogenizes the_
inequality to f (X) ≤ _g(X) on both sides by applying conditions in P. Then, by applying theorems_
to the left-hand side of the target inequality, AIPS generates a series of new inequalities:
_f_ (X) _h1(X), . . ., f_ (X) _hn(X)_
_≤_ _≤_
This results in subgoals hi(X) _g(X). Similarly, by applying theorems to the right-hand side, AIPS_
_≤_
also generates subgoals f (X) _sj(X). The second method involves applying transformation rules_
_≤_
such as sympy.expand and sympy.apart to the goal, generating subgoals that are equivalent to the
original inequality.
**Evaluation. AIPS employs a value function Vθ to assess the difficulty of each inequality. Formally,**
we have a function f parameterized by η that encodes the inequality expression s. The encoded
embedding vector fη(s) is then fed into a deep neural network gϕ, which outputs a value in the
interval [0,1]. We choose f to be a transformer encoder with average pooling (Vaswani et al. 2017).
**Planning. With the evaluation function Vθ, we use the Best-First search algorithm for planning. We**
also test the performance of Monte-Carlo Tree Search (MCTS) algorithm, where the result is less
satisfactory. There are two primary reasons for this. First, the action space for each state is extremely
large, leading to explosive growth of the MCTS searching tree. Second, the high cost of reasoning
steps makes the simulation step in MCTS nearly impractical, often exceeding time limits.
We also note that our prover can be combined with any heuristic function, and thus design various
baselines in our experiments.
**3.3.2** **Pre-training Value Network Using a Heuristic Function**
We define the tree-depth D of an inequality as the maximum depth of the expression trees on both
sides. Proving an algebraic inequality is equivalent to reducing the tree-depth of the inequality to one.
We use D as the supervision information to train initial heuristic function finit in the Best-First search
algorithm. That is to say, we pre-train a value network Vθ as finit on the synthetic dataset by utilizing
the tree-depth D.
**3.3.3** **Fine-tuning Value Network on Filtered Synthetic Data**
We create a new dataset by removing all inequalities with inference depth less than 4. We then
randomly sample 1,200 problems and sort them by tree-depth in ascending order. For inequalities
with the same tree-depth, they are sorted by the length of their string representation, with shorter
lengths placed first.
The fine-tuning procedure involves sequentially proving these inequalities and updating the parameters
of the value network. If an inequality is successfully proved, we record the set of subgoals on the
proof path as T and the set of subgoals that are searched but not on the proof path as F . The values
of the elements in T are scaled down by a factor of ϵ, while the values of the elements in F are
increased. Using these labels, we perform a training round on the value network Vθ, and then proceed
to the next problem. This iterative process is used to adjust the network parameters. See Appendix
B.2 for more details.
-----
**4** **Experiments**
We evaluate AIPS on an Olympiad-level algebraic inequality problem test set. It outperforms the stateof-the-art methods in terms of the number of solved problems, demonstrating the strong algebraic
intuitions developed by the learned value network.
**4.1** **An Olympiad-Level Inequality Benchmark**
Current benchmarks for Olympiad-level math problems, such as miniF2F (Zheng et al. 2021) and
Fimo (Liu et al. 2023), cover a wide array of topics but often lack a dedicated section for algebraic
inequalities. In inequality benchmarks like INT (Wu et al. 2020), the problems are typically of limited
difficulty. To address this gap, we collect all ternary and quaternary algebraic inequality problems
from IMO since 1990. Additionally, we include challenging problems from IMO shortlists and
various national mathematical Olympiads, such as the USAMO, the USA National Team Selection
Tests, and the Polish, Japanese, and Korean Mathematical Olympiads, all of which are of comparable
difficulty to the IMO. In total, we compile 20 problems for our test set, naming it MO-INT-20
(Math-Olympiad-INequality-Test-20). All problems are checked to ensure they are not in AIPS’s
training datasets. We also translate the test problems into Lean for subsequent experiments.
**4.2** **Comparison Methods**
Current theorem provers include interactive theorem provers, large language models capable of
generating natural language proofs, and neural symbolic theorem provers. We compare LeanCopilot
(Song et al. 2024), the open-source state-of-the-art interactive theorem prover in Lean. Additionally,
we evaluate general large language models like GPT-4, GPT-4 Turbo and Gemini 1.5 Pro, as well as
the math-specific language model Llemma-7b (Azerbayev et al. 2023). For neural symbolic theorem
provers, we design various baselines, including our deductive engine paired with breadth-first search
and MCTS, our deductive engine equipped with tree-depth in Section 3.3.2 or LLM heuristics as the
value function, and our AIPS with only pretrained value network.
It should be noted that we cannot compare with several existing interactive theorem provers (Polu and
Sutskever 2020; Polu et al. 2022) since these provers are not open source to be reproduced. However,
it is reported that these provers can only prove a few early Olympiad inequalities, as detailed in the
appendix of their respective papers.
**4.3** **Comparison Results and Analysis**
We test 11 different provers on the inequalities in MO-INT-20, with each problem limited to 90 minutes of solving time, consistent with the standard problem-solving time in the IMO. The comparison
results are shown in Table 1. It can be seen that our AIPS achieves the best performance and solves
10 out of 20 problems.
Table 1: Model Performances on the MO-INT-20. DE denotes our deductive engine. BFS and
MCTS are Breadth-First Search and Monte Carlo Tree Search, respectively.
|Model Category|Model|Problems Solved (20)|
|---|---|---|
|Large Language Models|Gemini 1.5 Pro|1|
||GPT-4|0|
||GPT-4 Turbo|0|
||Llemma-7b|0|
|Interactive Theorem Provers|LeanCopilot (LeanDojo)|0|
|Neural-Symbolic Provers|DE + GPT-4 Turbo’s heuristics|6|
||DE + BFS|4|
||DE + MCTS|5|
||DE + tree-depth heuristic function|7|
||AIPS with pretrained value network|7|
||AIPS|10|
**Model Category** **Model** **Problems Solved (20)**
Gemini 1.5 Pro 1
Large Language Models GPT-4 0
GPT-4 Turbo 0
Llemma-7b 0
Interactive Theorem Provers LeanCopilot (LeanDojo) 0
DE + GPT-4 Turbo’s heuristics 6
Neural-Symbolic Provers DE + BFS 4
DE + MCTS 5
DE + tree-depth heuristic function 7
AIPS with pretrained value network 7
AIPS 10
**Analysis of Large Language Models’ Performance. Large language models like GPT-4 have**
demonstrated remarkable reasoning abilities (Lewkowycz et al. 2022; Wei et al. 2022). However, in
-----
this test, only one of the four models, Gemini 1.5 Pro, successfully generates a fully correct natural
language proof. When solving problems, large language models tend to either make trivial mistakes
or indicate that they do not know how to solve them, despite the potential contamination of their
training data by online proofs. These results reveal their limited math reasoning ability.
**Analysis on a Formal Theorem Prover’s Performance. Recent studies reveal the capabilities of**
neural theorem provers based on Interactive Theorem Prover (ITP) frameworks (Yang et al. 2024;
Rute et al. 2024). These systems generally convert theorem proving into code completion tasks. We
evaluate the performance of one such theorem prover, LeanCopilot (Song et al. 2024), developed
from LeanDojo, on our test set. LeanCopilot is the current open-source state-of-the-art theorem
prover based on Lean. The results indicate its limited ability to solve complex algebraic problems:
None of the problems are solved through proof search in LeanCopilot. Additional tests on tactic
suggestions (see Appendix B.5.3) show that current formal theorem provers struggle to predict the
complex premises required for proving inequalities.
**Analysis on Neural Symbolic Provers’ Performance. In this test, neural symbolic provers demon-**
strate a strong ability to prove algebraic inequalities using best-first search algorithm. By applying
either breadth-first search or MCTS algorithm, our deductive engine successfully solves four and
five problems, respectively. We also test performance under the guidance of a tree-depth heuristic
function and a pre-trained value network using the best-first search algorithm, both of which solve
seven problems. Additionally, we prompt GPT-4 Turbo and find it exhibit some algebraic intuition,
successfully guiding the deductive engine to solve six problems—two more than the breadth-first
search. However, it is worth noting that large language models (LLMs) may occasionally prioritize
lengthy and meaningless subgoals. Due to the exponential growth of the number of new inequalities
as the width and height of the expression trees increase, it can result in expression strings longer
than the LLMs’ input context length. For example in problem 4 from the 2014 Japan Mathematical
Olympiad, it chooses a very long subgoal at iteration 2, resulting in subgoals at the next iteration
being three times longer than its input context length.
Finally, following a curriculum learning strategy on 1,000 inequality problems, AIPS achieves the
best performance, solving 10 out of 20 problems. Among the 10 problems from the IMO or IMO
shortlist, it successfully solves five, reaching the average level of IMO contestants. We also test the
performances of AIPS after 200, 400, 600, and 800 loops of fine-tuning value network (see Appendix
B.3). The results demonstrate that our value curriculum learning strategy is very effective, with the
number of proof search steps significantly decreasing during the training process, and the number of
solved problems increasing to 10 ultimately.
**5** **Conclusion**
In conclusion, solving Olympiad-level mathematical problems is a significant milestone in machine
intelligence and automated reasoning. The lack of large-scale, high-quality datasets presents a
challenge, particularly in algebraic systems. To address this, we propose AIPS, an Algebraic Inequality
_Proving System, which autonomously generates complex inequality theorems and effectively solves_
Olympiad-level inequality problems without human input. Utilizing a value curriculum learning
strategy, AIPS demonstrated strong mathematical intuition by solving 10 out of 20 International
Mathematical Olympiad-level problems. One of these theorems was selected for a major city’s 2024
Mathematical Olympiad.
In the future, by incorporating more fundamental theorems and operational rules, our AIPS could
solve even more complex problems, discover a greater number of non-trivial theorems, and assist
mathematicians in solving modern mathematical challenges. However, it currently lacks the ability to
autonomously propose and comprehend new definitions. Instead, it relies on handwritten theorems
and matching rules, which is time-consuming. Addressing this limitation is a crucial area for future
research.
-----
**6** **Acknowledgements**
We extend our heartfelt gratitude to the three distinguished contestants—two National Mathematical
Olympiad gold medalists and one silver medalist—for their invaluable evaluations of our synthetic
theorems. We also express our sincere thanks to their coach Zhibin Liang, whose efforts made this
collaboration possible. Furthermore, we deeply appreciate the insightful discussions from Jiajun
Song, Yuxuan Wang, and Dr. Chi Zhang at Beijing Institute for General Artificial Intelligence.
**References**
Judea Pearl. Graphical models for probabilistic and causal reasoning. Quantified representation of
_uncertainty and imprecision, pages 367–389, 1998._
W-T Wu. On the decision problem and the mechanization of theorem proving in elementary geometry.
_Scientia Sinica, 21:157–179, 1978._
Shang-Ching Chou, Xiao-Shan Gao, and Jing-Zhong Zhang. A deductive database approach to
automated geometry theorem proving and discovering. Journal of Automated Reasoning, 25(3):
219–246, 2000.
Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry
without human demonstrations. Nature, 625(7995):476–482, 2024.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems, 2021. URL https://arxiv. org/abs/2110.14168, 2021.
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical report.
_arXiv preprint arXiv:2303.08774, 2023._
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
Yuhuai Wu, Albert Qiaochu Jiang, Jimmy Ba, and Roger Grosse. Int: An inequality benchmark for
evaluating generalization in theorem proving. arXiv preprint arXiv:2007.02924, 2020.
Hongming Zhang and Tianyang Yu. Alphazero. Deep Reinforcement Learning: Fundamentals,
_Research and Applications, pages 391–415, 2020._
Guillaume Lample, Marie-Anne Lachaux, Thibaut Lavril, Xavier Martinet, Amaury Hayat, Gabriel
Ebner, Aurélien Rodriguez, and Timothée Lacroix. Hypertree proof search for neural theorem
proving (2022). URL https://arxiv. org/abs/2205.11491.
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344,
2022.
Kaiyu Yang, Aidan Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil, Ryan J
Prenger, and Animashree Anandkumar. Leandojo: Theorem proving with retrieval-augmented
language models. Advances in Neural Information Processing Systems, 36, 2024.
Peiyang Song, Kaiyu Yang, and Anima Anandkumar. Towards large language models as copilots for
theorem proving in lean. arXiv preprint arXiv:2404.12534, 2024.
John Harrison, Josef Urban, and Freek Wiedijk. History of interactive theorem proving. In Handbook
_of the History of Logic, volume 9, pages 135–214. Elsevier, 2014._
-----
Leonardo De Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
lean theorem prover (system description). In Automated Deduction-CADE-25: 25th International
_Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings 25, pages_
378–388. Springer, 2015.
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicaël Courant, Yann Coscoy, David Delahaye,
Daniel de Rauglaudre, Jean-Christophe Filliâtre, Eduardo Giménez, Hugo Herbelin, et al. The coq
proof assistant reference manual. INRIA, version, 6(11), 1999.
Tobias Nipkow, Markus Wenzel, and Lawrence C Paulson. Isabelle/HOL: a proof assistant for
_higher-order logic. Springer, 2002._
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C Paulson. Isarstep: a benchmark for high-level
mathematical reasoning. arXiv preprint arXiv:2006.09265, 2020.
Floris van Doorn, Gabriel Ebner, and Robert Y Lewis. Maintaining a library of formal mathematics.
In International Conference on Intelligent Computer Mathematics, pages 251–267. Springer, 2020.
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
_International Conference on Machine Learning, pages 6984–6994. PMLR, 2019._
Georges Gonthier et al. Formal proof–the four-color theorem. Notices of the AMS, 55(11):1382–1393,
2008.
Peter Scholze. Liquid tensor experiment. Experimental Mathematics, 31(2):349–354, 2022.
Kevin Buzzard and Richard Taylor. A lean proof of fermat’s last theorem. 2024.
David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche,
Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering
the game of go with deep neural networks and tree search. nature, 529(7587):484–489, 2016.
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez,
Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, et al. Mastering the game of go without
human knowledge. nature, 550(7676):354–359, 2017.
Rina Dechter and Judea Pearl. Generalized best-first search strategies and the optimality of a. Journal
_of the ACM (JACM), 32(3):505–536, 1985._
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing
_systems, 30, 2017._
Chengwu Liu, Jianhao Shen, Huajian Xin, Zhengying Liu, Ye Yuan, Haiming Wang, Wei Ju,
Chuanyang Zheng, Yichun Yin, Lin Li, et al. Fimo: A challenge formal dataset for automated
theorem proving. arXiv preprint arXiv:2309.04295, 2023.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q
Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for
mathematics. arXiv preprint arXiv:2310.10631, 2023.
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving quantitative
reasoning problems with language models. Advances in Neural Information Processing Systems,
35:3843–3857, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_neural information processing systems, 35:24824–24837, 2022._
Jason Rute, Miroslav Olšák, Lasse Blaauwbroek, Fidel Ivan Schaposnik Massolo, Jelle Piepenbrock,
and Vasily Pestun. Graph2tac: Learning hierarchical representations of math concepts in theorem
proving. arXiv preprint arXiv:2401.02949, 2024.
-----
**Appendix**
**A Technical Details of the Deductive Engine** **13**
A.1 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
A.2 Theorems, Rules and Pattern Matching . . . . . . . . . . . . . . . . . . . . . . . . 14
A.3 Details of Synthetic Data Generation . . . . . . . . . . . . . . . . . . . . . . . . . 15
**B** **Experiments and Analysis** **16**
B.1 Synthetic Dataset Statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
B.2 Details of Value Curriculum Learning . . . . . . . . . . . . . . . . . . . . . . . . 16
B.3 Performance Analysis During Curriculum Learning . . . . . . . . . . . . . . . . . 16
B.4 Our Benchmark: Mathematical-Olympiad-INequality-Test-20 . . . . . . . . . . . 17
B.5 Details of Comparison Methods and Testing Results . . . . . . . . . . . . . . . . . 20
**C Human Evaluation of Generated Synthetic Theorems** **32**
C.1 10 Synthetic Theorems and 4 Comparison IMO Problems . . . . . . . . . . . . . . 32
C.2 Human Evaluation Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
C.3 Synthetic Theorem Selected for Mathematical Olympiad . . . . . . . . . . . . . . 34
-----
**A** **Technical Details of the Deductive Engine**
We provide more information on AIPS’ deductive engine and the training process for the value
network. To highlight the reasoning ability and maintain readability of proofs, we avoid using
brute-force methods such as augmentation-substitution and Wu’s method Wu (1978).
**A.1** **Background**
**A.1.1** **Basic Knowledge in Theorem Proving**
Theorem proving encompasses two types of reasoning: forward reasoning and backward reasoning.
Forward reasoning involves identifying a pattern match between a particular theorem and the given
conditions along with the universal variables, then deducing the conclusion. In contrast, backward
reasoning works in the opposite direction, where the conclusion and variables are matched with
a specific theorem, breaking down the main goal into smaller, more manageable subgoals. Both
methods are essential in constructing and navigating the logical steps to establish the validity of
complex mathematical theorems, as shown in Fig. 5.
Figure 5: Two examples of forward reasoning on the left and backward reasoning on the right.
**A.1.2** **Challenges in Algebraic Reasoning**
There are two main challenges in reasoning within algebraic systems. The first is the infinite reasoning
space within finite conditions, caused by the numerous possible expression trees and the vast search
space for premises. This contrasts with solving Euclidean geometry problems, where a deduction
fixed point exists with respect to a set of geometric rules or axioms. To address this issue, we
consider only the current expression tree at each step of reasoning. The second challenge lies in
pattern matching, which requires accurately identifying and applying relevant theorems to given
sub-structures. For theorems with function-type variables, like Jensen’s Inequality, pattern matching is
more challenging and time-consuming. We provide heuristic functions to identify possible structures
where Jensen’s Inequality can be applied.
-----
**A.2** **Theorems, Rules and Pattern Matching**
**A.2.1** **Theorems, Methods and Transformation Rules**
Our deductive engine incorporates six well-known inequality theorems frequently used in mathematical Olympiads, several one-variable inequality scaling and solving methods, and dozens of
algebraic transformation rules. The inequality theorems include the Arithmetic Mean-Geometric
**Mean (AM-GM) inequality, the weighted AM-GM inequality, Hölder’s inequality, Jensen’s**
**inequality, Schur’s inequality, and Müirhead’s theorem. For simplicity, we have excluded some**
theorems that can be directly proved using these inequalities, such as the Geometric Mean-Harmonic
Mean (GM-HM) inequality and the Cauchy-Schwarz inequality.
Here we list some frequently used methods and transformation rules:
- nodiv_expr: Multiply both sides to eliminate denominators
- nomul_expr: Divide both sides by all factors
- no_sep_denom: Combine fractions on both sides
- sep_neg: Move terms with negative coefficients to the other side of the inequality
- zero_side: Subtract one side from the other to make one side equal to zero
- no_pow: Remove roots at the second level from the top of the expression tree on both sides
- try_together_l, try_together_r: Combine fractions on the left or right side
- try_expand_l, try_expand_r: Expand expressions on the left or right side
- all_cyc_mul_expr: Multiply both sides by a cyclically symmetric polynomial, with one
of its generators on either the left or right side of the inequality (a generator is a term that,
when cyclically permuted, generates the expression)
- try_factor_both: Factorize both sides
- check_one_var: Check if the solution of a one-variable inequality is contained in a given
interval
- check_linear_ctr: Check if a one-variable expression can be applied with tangent line
trick
- find_main_fun: For a cyclically symmetric expression, try to find a function that can
match with Jensen’s inequality as well as generate this expression
**A.2.2** **Pattern Matching**
An important step in generating synthetic theorems is matching algebraic expressions with these
theorems. We use the AM-GM inequality as an example to illustrate pattern matching method as
follows.
**Theorem 1. (AM-GM) For non-negative real numbers a1, a2, . . ., an,**
_a1 + a2 + · · · + an ≥_ _n_ _√[n]_ _a1a2 · · · an_
_with equality if and only if a1 = a2 = · · · = an._
Assuming all variables are non-negative, pattern matching for an algebraic expression with the
AM-GM inequality (on the Left-Hand-Side) is explained in three steps:
1. Traverse through the expression tree, and label a node with 1 if the whole expression value
increases as the value of the node increases, with −1 if the expression value decreases as
the value of the node increases, and with None if this cannot be determined.
2. At each node labeled 1 or −1 and calculated with an Add operation, find all non-negative
sub-arguments of the node’s expression and place them in nonneg_set. Similarly, find all
non-positive sub-arguments and place them in nonpos_set.
3. For the obtained sets nonneg_set and nonpos_set, we use the following method to match the
mean inequalities:
- Arbitrarily partition each set into multiple subsets.
-----
- the sum of the elements in each subset can be used as a variable to match the left side
of the mean inequality.
- If a subset does not contribute to the inequality, it is excluded from the partition.
This process allows us to identify all possible mean inequalities that can be matched. We
then replace the original sub-expressions in the expression tree with the transformed ones
based on the matched inequalities. By doing so, a new inequality is derived according to the
labels.
**A.3** **Details of Synthetic Data Generation**
Olympiad inequalities aim for not only difficulty but also conciseness and elegance, a principle
also valued in modern mathematics. Although our deductive engine can generate various types
of inequalities, we focus on cyclically symmetric inequalities in semi-definite systems that can be
generated with a limited number of steps to avoid lengthy and chaotic expressions.
Initially, we generate thousands of premises as the starting points for data generation using Algorithm
2. For each generated premise, we run the data-generation algorithm described in the main paper.
During this process, we discard inequalities for which equality does not hold or which do not have
the desired form, and halt the generation after a maximum of 25 iterations of search. Utilizing 32
CPUs over an 8-hour period, the deductive engine produces 191,643 theorems. This demonstrates the
engine’s ability to efficiently generate a large number of high-quality inequality theorems, thereby
addressing the bottleneck of lacking a high-quality dataset for learning-based provers.
**Algorithm 2 Generating Initial Premises**
**function GENERATE_EXPRESSIONS(variable_list I, loop_limit N** )
Initialize Results and Basic_Operations
**for i ←** 1 to N do
Initialize New_Expressions
**for each pair (a, b) in I and each operation f in Basic_Operations do**
Add f (a, b) to New_Expressions
**end for**
Add New_Expressions to I
**end for**
**for each expression expr in I do**
Add cyclic summation of expr to Results
**end for**
**return Results**
**end function**
-----
**B** **Experiments and Analysis**
In this section, we provide details of our experiments and present the test results. We also include
technical analysis of these results.
**B.1** **Synthetic Dataset Statistics**
We conduct a statistical analysis on the synthetic dataset, focusing on inequality lengths (in string
representation) and tree-depth (the maximum expression tree height on both sides of an inequality),
as depicted in Figure 6. The distributions of lengths and tree-depth are related to the difficulty and
search complexity. These distributions illustrate that our theorems range from simple to complex,
reflecting a spectrum of difficulty levels in our dataset.
Figure 6: Ditribution of lengths and tree-depths of synthetic theorems.
**B.2** **Details of Value Curriculum Learning**
The value network Vθ comprises two main components: the pre-trained transformer encoder, Llemma7b (Azerbayev et al. 2023), followed by a 4096 × 256 × 1 multilayer perceptron that outputs a value
in the interval (0, 1). Initially, AIPS successfully resolves 7 out of 20 problems from the test set using
the pre-trained value network.
The value network Vθ functions as the heuristic in the best-first-search algorithm. It comprises two
main components: the pre-trained transformer encoder, Llemma-7b, followed by a 4096 × 256 × 1
feedforward neural network that outputs a value in the interval (0, 1). Initially, AIPS successfully
resolves 7 out of 20 problems from the test set using the pre-trained value network.
The procedure of value curriculum learning is as follows. After successfully proving a theorem, each
node along the proof path is relabeled with a value that is ϵ times its original value. For node that
has been searched but is not part of the proof path, if its original label is v, the label of this node is
updated at the end of this curriculum learning round according to the formula: max(m, v) _×_ _η +1_ _−_ _η._
Here m represents the maximum value after modification among the proof path nodes. Subsequently,
the relabeled nodes undergo 10 loops of fine-tuning training. We choose ϵ = 0.3 and η = 0.7.
Before the value curriculum learning process, we randomly select 1,200 theorems from the synthetic
dataset, excluding theorems with an inference depth of less than 4. These theorems undergo a
curriculum learning strategy tailored for the pre-trained model. We limit the time for solving each
problem to 40 minites. During curriculum learning, the theorems are solved and trained in an
ascending order, sorted first by tree-depth, then by theorem length. The first 150 problems are solved
within a mere two hours. After four days of training, AIPS solves 892 out of the first 1,000 problems,
with 887 successes in the first 950 theorems. Since it struggles to solve problems after the 950th
theorem, we decide to halt the training process at the 1,000th problem.
**B.3** **Performance Analysis During Curriculum Learning**
The extensive experiments verify that the value curriculum learning strategy is very effective. The
number of search loops required to solve testing theorems decreases noticeably throughout the
-----
training process, enabling AIPS to successfully solve 10 out of 20 IMO-level inequality problems
using an RTX-4090 GPU and a single CPU. Fig. 7 shows the decreasing number of search loops
during curriculum learning on the 2001 IMO Problem 2, and Fig. 8 shows the increasing number of
solved problems during curriculum learning.
Figure 7: AIPS progressively finds the proof path more efficiently throughout the training process.
Figure 8: AIPS solves more problems with the increasing iterations of value curriculum learning.
**B.4** **Our Benchmark: Mathematical-Olympiad-INequality-Test-20**
We collect all ternary and quaternary algebraic inequality problems from IMO since 1990, some
challenging problems from IMO shortlists and several national mathematical Olympiads, such as
-----
the USAMO, the USA National Team Selection Tests, the Polish/Korean/Japanese Mathematical
Olympiad, all of which are of comparable difficulty to the IMO. The collected 20 problems provide
a new challenging benchmark for the realm of automatic theorem proving, dubbed as MO-INT-20
(Math-Olympiad-INequality-Test-20). The details of these 20 problems are as follows.
- Problem 1 (IMO 1990 Shortlist):
For a > 0, b > 0, c > 0, d > 0 such that a · b + b · c + c · d + d · a = 1, show that:
_a[3]_ _b[3]_ _c[3]_ _d[3]_
_b + c + d_ [+] _c + d + a_ [+] _d + a + b_ [+] _a + b + c_ 3
_[≥]_ [1]
- Problem 2 (IMO 1993 Shortlist):
For a > 0, b > 0, c > 0, d > 0, show that:
_d_
_a + 2b + 3c_ 3
_[≥]_ [2]
_b + 2c + 3d_ [+]
3a + c + 2d [+]
2a + 3b + d [+]
- Problem 3 (IMO 1995 P2):
For a > 0, b > 0, c > 0 such that a · b · c = 1, show that:
1 1 1
_c[3](a + b) [+]_ _b[3](a + c) [+]_ _a[3](b + c)_ 2
_[≥]_ [3]
- Problem 4 (IMO 1996 Shortlist):
For a > 0, b > 0, c > 0 such that a · b · c = 1, show that:
_a · b_ _a · c_ _b · c_
_a[5]_ + a _b + b[5][ +]_ _a[5]_ + a _c + c[5][ +]_ _b[5]_ + b _c + c[5][ ≤]_ [1]
_·_ _·_ _·_
- Problem 5 (USAMO 1997 P5):
For a > 0, b > 0, c > 0, show that:
1 1 1
_a[3]_ + b[3] + a · b · c [+] _b[3]_ + c[3] + a · b · c [+] _c[3]_ + a[3] + a · b · c _[≤]_
- Problem 6 (IMO 1998 Shortlist A3):
For a > 0, b > 0, c > 0 such that a · b · c = 1, show that:
_a[3]_ _b[3]_ _c[3]_
(1 + b)(1 + c) [+] (1 + c)(1 + a) [+] (1 + a)(1 + b) 4
_[≥]_ [3]
_a · b · c_
- Problem 7 (IMO 2000 P2):
For a > 0, b > 0, c > 0 such that a · b · c = 1, show that:
(a 1 + [1]
_−_ _b_ [)(][b][ −] [1 + 1]c [)(][c][ −] [1 + 1]a [)][ ≤] [1]
- Problem 8 (IMO 2001 P2):
For a > 0, b > 0, c > 0, show that:
_a_
_√a[2]_ + 8bc
- Problem 9 (USAMO 2003 P5):
For a > 0, b > 0, c > 0, show that:
8ac + b[2][ +]
8ab + c[2][ ≥] [1]
(a + b + 2c)[2]
2c[2] + (a + b)[2][ + (]2b[a][2][ + 2]+ ([b]a[ +] +[ c] c[)])[2][2][ + (2]2a[2][a][ +]+ ([ b]b[ +] +[ c] c[)])[2][2][ ≤] [8]
- Problem 10 (Poland 2004):
For a > 0, b > 0, c > 0, d > 0, show that:
1
(a[3] + 63bcd) 3 [+]
1
(63acd + b[3]) 3 [+]
1
(63abd + c[3]) 3 [+]
1
(63abc + d[3]) 3
_[≥]_ [1]
-----
- Problem 11 (IMO 2004 Shortlist A5):
For a > 0, b > 0, c > 0 such that a · b + b · c + c · a = 1, show that:
[1] [1] [1]
1 3 1 3 1 3 1
+ +
_a_ [+ 6][b] _b_ [+ 6][c] _c_ [+ 6][a] _≤_ _a_ _b_ _c_
_·_ _·_
- Problem 12 (IMO 2006 P3):
Given real numbers a, b, c, show that:
9
_|ab(a[2]_ _−_ _b[2]) + bc(b[2]_ _−_ _c[2]) + ca(c[2]_ _−_ _a[2]))| ≤_ 16√2 [(][a][2][ +][ b][2][ +][ c][2][)][2]
- Problem 13 (IMO 2009 Shortlist):
For a > 0, b > 0, c > 0 such that _a[1]_ [+][ 1]b [+][ 1]c [=][ a][ +][ b][ +][ c][, show that:]
(2a + b + c)[−][2] + (a + 2b + c)[−][2] + (a + b + 2c)[−][2]
_≤_ 16[3]
- Problem 14 (USA IMO Team Selection 2010 P2):
For a > 0, b > 0, c > 0 such that a · b · c = 1, show that:
1
_a[5](b + 2c)[2][ ≥]_ 3[1]
_c[5](a + 2b)[2][ +]_
_b[5](2a + c)[2][ +]_
- Problem 15 (USAMO 2011 P1):
For a > 0, b > 0, c > 0 such that a[2] + b[2] + c[2] + (a + b + c)[2] _≤_ 4, show that:
_a · b + 1_
(a + b)[2][ +][ b](b[ ·] +[ c][ + 1] c)[2][ +][ c](c[ ·] +[ a][ + 1] a)[2][ ≥] [3]
- Problem 16 (Korea 2011 P4):
For a ≥ 0, b ≥ 0, c ≥ 0 such that a + b + c = 1, show that:
1
_c[2]_ _−_ 4c + 9 _[≤]_ 18[7]
_a[2]_ 4a + 9 [+]
_−_
- Problem 17 (USAMO 2012):
For a > 0, b > 0, c > 0, show that:
_b[2]_ 4b + 9 [+]
_−_
_b[3]_ + 3c[3]
5b + c [+][ a]5[3][ + 3]a + b[b][3] [+ 3]a[a] + 5[3][ +][ c]c[3] _≥_ 3[2] [(][a][2][ +][ b][2][ +][ c][2][)]
- Problem 18 (Japan 2014 P5):
For a ≥ 0, b ≥ 0, c ≥ 0 such that a + b + c = 1, show that:
_c_
9ab + 4(a − _b)[2]_ + 1 _[≥]_ 2[1]
9bc + 4(b _c)[2]_ + 1 [+]
_−_
9ac + 4( _a + c)[2]_ + 1 [+]
_−_
- Problem 19 (USAMO 2017 P6):
For a ≥ 0, b ≥ 0, c ≥ 0, d ≥ 0 such that a + b + c + d = 4, show that:
_d_
_a[3]_ + 4 3
_[≥]_ [2]
_b[3]_ + 4 [+]
_c[3]_ + 4 [+]
_d[3]_ + 4 [+]
- Problem 20 (IMO 2020 P2):
For a ≥ _b, b ≥_ _c, c ≥_ _d, d > 0 such that a + b + c + d = 1, show that:_
(a + 2b + 3c + 4d)a[a]b[b]c[c]d[d] _< 1_
-----
**B.5** **Details of Comparison Methods and Testing Results**
**B.5.1** **Monte-Carlo Tree Search**
We evaluate the performance of Monte-Carlo Tree Search (MCTS). Compared to games like Go or
chess, theorem proving can have an extremely large or even infinite action space, since applying each
theorem or axiom usually comes with a set of parameters. Therefore, a direct application of MCTS to
our problems is infeasible. To address this, we need to modify the MCTS algorithm.
First, we place a restriction on our action space: at each state, we sample all possible actions
generated from the current proof state, then sort them according to a tree-depth heuristic function,
which evaluates the difficulty of the proof state after applying them, and pick the first k proof states
(we choose k = 5). During the selection step in MCTS, we apply the Upper Confidence Bounds
algorithm,
ln(N )
SelectedAction = Argmaxi vi + C · s _ni_
Here, ni is the number of visits of the i-th action (proof state), _N is the number of visits of the current_
state, C is an adjustable hyperparameter (we choose C = 0.3√2), and vi is the value of the state
after applying the i-th action.
We use two methods to compute vi: either it is directly given by the heuristic evaluation function, or
we execute a best-first search to a constant depth d (d = 2), and take the value of the current best
state as vi.
**B.5.2** **Testing Examples from Large Language Models**
Large Language Models (LLMs) currently struggle to solve IMO-level inequalities through the
generation of natural language proofs. In our evaluation of 4 LLMs—GPT-4, GPT-4 Turbo, Gemini
1.5 Pro, and Llemma-7b—only Gemini 1.5 Pro successfully generates a correct solution to USAMO1997-P5. Proofs generated by GPT-4 and GPT-4 Turbo contain some logical errors, while Llemma-7b
rarely produces meaningful reasoning sentences. We present the results of first three LLMs below,
including the only correct proof by Gemini 1.5 Pro to USAMO-1997-P5, and the incorrect proofs by
using these three LLMs to IMO-1995-P2.
ln(N )
_ni_
SelectedAction = Argmaxi
-----
**Statement: (USAMO 1997 P5) Given three positive real numbers a, b, c, show that**
1 1 1 1
_a[3]_ + b[3] + abc [+] _b[3]_ + c[3] + abc [+] _c[3]_ + a[3] + abc _abc_
_[≤]_
**Correct proof:**
1. Factorization: Recall the factorization of the sum of cubes:
_x[3]_ + y[3] = (x + y)(x[2] _−_ _xy + y[2])_
2. Apply AM-HM: For non-negative numbers x and y, the AM-HM inequality states:
_x + y_ 2
2 _≥_ 1/x + 1/y
Rearranging the inequality:
_x[2]_ + y[2] _−_ _xy ≥_ _xy_
3. Manipulate the given inequality: Lets focus on the first term and use the factorization:
1 1
_a[3]_ + b[3] + abc [=] (a + b)(a[2] _−_ _ab + b[2]) + abc_
Since a[2] _−_ _ab + b[2]_ _≥_ _ab (from step 2), we have:_
1 1 1
(a + b)(a[2] _−_ _ab + b[2]) + abc_ _[≤]_ (a + b)ab + abc [=] (a + b + c)ab
4. Apply similar reasoning to other terms: Similarly, for the second and third terms:
1 1
_b[3]_ + c[3] + abc (a + b + c)bc
_[≤]_
1 1
_c[3]_ + a[3] + abc (a + b + c)ca
_[≤]_
5. Combine the inequalities: Adding the three inequalities we just derived and simplify the right
hand side:
1 1 1 1
_a[3]_ + b[3] + abc [+] _b[3]_ + c[3] + abc [+] _c[3]_ + a[3] + abc _abc_
_[≤]_
This completes the proof.
Figure 9: Correct proof generated by Gemini 1.5 Pro.
-----
**Statement: (IMO 1995 P2) Given positive real numbers a, b, c such that abc = 1, show that**
1 1 1
_c[3](a + b) [+]_ _b[3](a + c) [+]_ _a[3](b + c)_
_[≥]_ [3][/][2]
**Incorrect proof: (Generated by Gemini 1.5 Pro)**
Transformation: First, let’s simplify the given expression by multiplying the numerator and denominator of each term by abc (which is equal to 1 due to the condition abc = 1):
1 1 1 _ab_ _ac_ _bc_
_c[3](a + b) [+]_ _b[3](a + c) [+]_ _a[3](b + c) [=]_ _c[2](a + b) [+]_ _b[2](a + c) [+]_ _a[2](b + c)_
Applying AM-HM: Now, consider the following positive numbers:
_x1 = c[2](a + b), x2 = b[2](a + c), x3 = a[2](b + c)_
Applying AM-HM inequality to these:
_c[2](a + b) + b[2](a + c) + a[2](b + c)_ 3
3 _≥_ _c[3](a1+b)_ [+] _b[3](a1+c)_ [+] _a[3](b1+c)_
Simplifying: Expanding and simplifying the left side:
_a[2]b + ab[2]_ + ac[2] + a[2]c + b[2]c + bc[2] 3
3 _≥_ _ab_ _ac_ _bc_
_c[2](a+b)_ [+] _b[2](a+c)_ [+] _a[2](b+c)_
Since abc = 1, we can substitute and further simplify:
_ab/c + ac/b + bc/a_ 3
3 _≥_ _c[3](a1+b)_ [+] _b[3](a1+c)_ [+] _a[3](b1+c)_
Final Step: Invert both sides of the inequality (remembering to flip the inequality sign):
1 1 1
_c[3](a + b) [+]_ _b[3](a + c) [+]_ _a[3](b + c)_
_[≥]_ [3][/][2]
This proves the required inequality.
Figure 10: Incorrect proof generated by Gemini 1.5 Pro. The errors are labeled in blue.
-----
**Statement: (IMO 1995 P2) Given positive real numbers a, b, c such that abc = 1, show that**
1 1 1
_c[3](a + b) [+]_ _b[3](a + c) [+]_ _a[3](b + c)_
_[≥]_ [3][/][2]
**Incorrect proof: (Generated by ChatGPT4 Turbo)**
To prove this inequality, we can apply the AM-GM (Arithmetic Mean - Geometric Mean) inequality
as follows:
Given abc = 1:
By applying the AM-GM inequality:
_a/b + b/a ≥_ 2, b/c + c/b ≥ 2, c/a + a/c ≥ 2
Summing these inequalities:
(a/b + b/a) + (b/c + c/b) + (c/a + a/c) ≥ 6
Rearrange and factor:
_a[2]/b[2]_ + b[2]/c[2] + c[2]/a[2] + b[2]/a[2] + c[2]/b[2] + a[2]/c[2] _≥_ 6
Now consider the expressions under the original inequality:
1 1
_a[3](b + c) [=]_ _a[3](b + c)_
Since abc = 1, we can write this as:
_b[3]c[3]_ (bc)[3]
=
_a[3](b + c) [=]_ _a[3](b + c)_
By AM-GM, b + c ≥ 2√bc. Therefore:
1 (bc)[3] = _b[2]c[2]_ _√bc =_ _[b][5][/][2][c][5][/][2]_
_a[3](b + c)_ 2a[3][√]bc 2a[3][√]bc _·_ 2a[3]
_[≥]_
We can apply similar transformations to the other fractions:
1 1
_,_
_b[3](c + a)_ 2b[3] _c[3](a + b)_ 2c[3]
_[≥]_ _[a][5][/][2][c][5][/][2]_ _[≥]_ _[a][5][/][2][b][5][/][2]_
Adding these up:
1 1 1
+ _[a][5][/][2][c][5][/][2]_ + _[a][5][/][2][b][5][/][2]_
_a[3](b + c) [+]_ _b[3](c + a) [+]_ _c[3](a + b)_ 2a[3] 2b[3] 2c[3]
_[≥]_ _[b][5][/][2][c][5][/][2]_
Using our earlier inequality
(a[2]/b[2] + b[2]/a[2] + b[2]/c[2] + c[2]/b[2] + c[2]/a[2] + a[2]/c[2] _≥_ 6)
we can apply the generalized AM-GM inequality:
1 1 1
+ _[a][5][/][2][c][5][/][2]_ + _[a][5][/][2][b][5][/][2]_ )
_a[3](b + c) [+]_ _b[3](c + a) [+]_ _c[3](a + b)_ 2 _a[3]_ _b[3]_ _c[3]_
_[≥]_ [1] _[·][ (]_ _[b][5][/][2][c][5][/][2]_
Each of these terms is equal to 1 by our substitution using abc = 1, simplifying this expression:
_≥_ 1/2 + 1/2 + 1/2 = 3/2
Therefore, we have proved:
1 1 1
_a[3](b + c) [+]_ _b[3](c + a) [+]_ _c[3](a + b)_
_[≥]_ [3][/][2]
Figure 11: Incorrect proof generated by ChatGPT4 Turbo. The errors are labeled in blue.
-----
**Statement: (IMO 1995 P2) Given positive real numbers a, b, c such that abc = 1, show that**
1 1 1
_c[3](a + b) [+]_ _b[3](a + c) [+]_ _a[3](b + c)_
_[≥]_ [3][/][2]
**Incorrect proof: (Generated by ChatGPT 4)**
The Arithmetic Mean - Geometric Mean (AM-GM) Inequality states:
_x1 + x2 +_ + xn _n_
_· · ·_ _√x1_ _x2 . . . xn_
_n_ _≥_ _·_
Analyzing Each Fractionwith equality if and only if x1 = x2 · · · = xn. Let’s consider using AM-GM on the denominators.
For _c[3](a1+b)_ [: Since][ abc][ = 1][, we have][ c][3][ =] _a[3]1b[3][ . Hence,]_
1
_c[3](a + b) [=][ a]a +[3][b] b[3]_
Similiarly:
1 1
_b[3](a + c) [=][ a]a +[3][c] c[3]_ _[,]_ _a[3](b + c) [=][ b]b +[3][c] c[3]_
Here each fraction can be analyzed by using AM-GM on the denominators.
_a + b ≥_ 2√ab, b + c ≥ 2√bc, c + a ≥ 2[√]ca
_c_
Given abc = 1, we also have _√ab =_ _√abc_ [=][ c][ and similarly for others.][ Thus,]
_a + b ≥_ 2c, b + c ≥ 2a, c + a ≥ 2b
Substituting back:
_a[3]b[3]_
_a + b_ 2c [, b]b +[3][c] c[3] 2a [, c]c +[3][a] a[3] 2b
_[≥]_ _[a][3][b][3]_ _[≥]_ _[b][3][c][3]_ _[≥]_ _[c][3][a][3]_
Since a[3]b[3] = c[3], b[3]c[3] = a[3], c[3]a[3] = b[3] from abc = 1,
_a[3]b[3]_
= _[c][2]_ = _[b][2]_
2c 2 _[, b]2[3]a[c][3]_ [=][ a]2[2] _[, c][3]2[a]b[3]_ 2
Adding these,
_a[2]_
2 [+][ b]2 [2] [+][ c]2 [2] [=][ a][2][ +][ b]2[2][ +][ c][2]
To conclude, we need the final step:
_a[2]_ + b[2] + c[2] _≥_ 3
3
which holds by AM-GM as a[2] + b[2] + c[2] _≥_ 3√a[2]b[2]c[2] = 3. This completes the proof.
Figure 12: Incorrect proof generated by ChatGPT 4. The errors are labeled in blue.
-----
**B.5.3** **Testing Examples from LeanCopilot**
We also test LeanCopilot (developed from LeanDojo Yang et al. (2024)), the current open-source
state-of-the-art theorem prover based on Lean. It solves none of the 20 problems, all resulting in
time-limited feedback. We present some results of the tactic search provided by LeanCopilot in
Figure 13. It can be seen that the prover tends to rely heavily on built-in tactics and struggles to
predict complex premises.
USAMO−1997−P5 :
theorem ( a b c :R) ( h0 : a >0)( h1 : b >0)( h2 : c >0):
1 / ( a^3+b^3+a*b*c ) + 1 / ( b^3+c^3+a*b*c ) + 1 / ( c^3+a^3+a*b*c ) <=1/( a*b*c )
Try t h e s e :
nth_rw 1 [ \ l mul_one ( a*b*c ) ] # Replace abc by 1* abc
r i n g _ n f # Simplify by r i n g axiom
f i e l d _ s i m p # Simplify by f i e l d axiom
r e f i n e ’ le_of_eq _ # Proving i n e q u a l i t y by e q u a l i t y
rw [ one_div ] # Replace 1/ x by x ^( −1)
nth_rw 3 [ \ l mul_one ( a*b*c ) ] # Replace abc by 1* abc
rw [ l e _ d i v _ i f f ’ ] # Multiply abc on both s i d e s
IMO−1995−P2 :
theorem ( a b c :R) ( h0 : a >0)( h1 : b >0)( h2 : c >0)( h3 : a*b*c =1):
1 / ( c ^3*3*( a+b ) ) + 1 / ( b ^3*3*( a+c ) ) + 1 / ( a ^3*3*( b+c ) ) >= 3/2
Try t h e s e :
r e f i n e ’ le_of_eq _ # Proving i n e q u a l i t y by e q u a l i t y
norm_num # Normalize numerical e x p r e s s i o n s
rw [ \ l h3 ] # Replace 1 by abc
f i e l d _ s i m p # Simplify by f i e l d axiom
r i n g _ n f # Simplify by r i n g axiom
f i e l d _ s i m p [ h1, h2 ] # Simplify by f i e l d axiom + h1, h2
push_cast #Move c e r t a i n c o e r c i o n s inward
Figure 13: Tactics suggested by LeanCopilot to two problems, namely USAMO-1997-P5 and IMO-1995**P2.**
**B.5.4** **10 Problems Solved by Our AIPS**
When proving an inequality, AIPS first homogenizes both sides using the given conditions if the
inequality is not already homogenized, thereby obtaining a new inequality. It then performs mixed
reasoning on the new inequality to complete the proof. We present the proofs for the 10 problems
solved by our AIPS as follows.
-----
**1. Solution to IMO-1990-Shortlist Problem**
By <function try_homo>, It is equivalent to prove
_a[3]_ _b[3]_ _c[3]_ _d[3]_
_b + c + d_ [+] _a + c + d_ [+] _a + b + d_ [+] _a + b + c_ 3 [+][ ad]3 [+][ bc]3 [+][ cd]3
_[≥]_ _[ab]_
by <function check_AM_GM_Mul2>, it remains to prove
_a[2]_ _a[3]_ _b[3]_ _c[3]_ _d[3]_
3 [+][ b]3 [2] [+][ c]3 [2] [+][ d]3[2] _b + c + d_ [+] _a + c + d_ [+] _a + b + d_ [+] _a + b + c_
_[≤]_
by <function try_together_l>, it remains to prove
_a[2]_ + b[2] + c[2] + d[2] _a[3]_ _b[3]_ _c[3]_ _d[3]_
3 _≤_ _b + c + d_ [+] _a + c + d_ [+] _a + b + d_ [+] _a + b + c_
we use Hölder’s inequality:
(a[2] + b[2] + c[2] + d[2])[2] _≤_
(a(b + c + d) + b(a + c + d) + c(a + b + d) + d(a + b + c))×
(a[3]/(b + c + d) + b[3]/(a + c + d) + c[3]/(a + b + d) + d[3]/(a + b + c)).
It remains to prove
_a[2]_ + b[2] + c[2] + d[2] _a[2]_ + b[2] + c[2] + d[2][][2]
3 _≤_ _a (b + c + d) + b (a + c + d) + c (a + b + d) + d (a + b + c)_
by <function all_cyc_mul_expr>, it remains to prove
1 _a[2]_ + b[2] + c[2] + d[2]
3 _a (b + c + d) + b (a + c + d) + c (a + b + d) + d (a + b + c)_
_[≤]_
For f (x) = x[2], f _[′′](x) > 0 for 0 < x. we use Jensen’s inequality:_
4(a/4 + b/4 + c/4 + d/4)[2] _≤_ _a[2]_ + b[2] + c[2] + d[2],
it remains to prove
_a_ 2
1 4 4 [+][ b]4 [+][ c]4 [+][ d]4
3 _a (b + c + d) + b (a + c + d) + c (a + b + d) + d (a + b + c)_
_[≤]_
For f (x) = x(a + b + c + d − _x), f_ _[′′](x) < 0 for 0 < x < a + b + c + d, we use Jensen’s inequality:_
_a(b + c + d) + b(a + c + d) + c(a + b + d) + d(a + b + c) ≤_
4(a/4 + b/4 + c/4 + d/4)(3a/4 + 3b/4 + 3c/4 + 3d/4)
it remains to prove
1 _a4_ [+][ b]4 [+][ c]4 [+][ d]4
3 3a
_[≤]_ 4 [+][ 3]4[b] [+][ 3]4[c] [+][ 3]4[d]
by <function try_simp_r>, this is true!
-----
**2. Solution to IMO-1993-Shortlist problem.**
To prove
_a_ _b_ _c_ _d_
_b + 2c + 3d_ [+] 3a + c + 2d [+] 2a + 3b + d [+] _a + 2b + 3c_ 3
_[≥]_ [2]
we use Hölder’s inequality:
_a_ _b_ _c_ _d_
(a + b + c + d)[2] (
_≤_ _b + 2c + 3d_ [) +] 3a + c + 2d [+] 2a + 3b + d [+] _a + 2b + 3c_ [)][×]
(a(b + 2c + 3d) + b(3a + c + 2d) + c(2a + 3b + d) + d(a + 2b + 3c)).
It remains to prove
2 (a + b + c + d)[2]
3 4ab + 4ac + 4ad + 4bc + 4bd + 4cd
_[≤]_
by <function all_cyc_mul_expr>, it remains to prove
2 1
3 (a + b + c + d)[2][ ≤] 4ab + 4ac + 4ad + 4bc + 4bd + 4cd
by <function try_expand_l>, it remains to prove
2
3a[2] + 6ab + 6ac + 6ad + 3b[2] + 6bc + 6bd + 3c[2] + 6cd + 3d[2][ ≤]
1
4ab + 4ac + 4ad + 4bc + 4bd + 4cd
by <function nodiv_expr>, it remains to prove
8ab + 8ac + 8ad + 8bc + 8bd + 8cd ≤ 3a[2] + 6ab + 6ac + 6ad + 3b[2] + 6bc + 6bd + 3c[2] + 6cd + 3d[2]
by <function zero_side>, it remains to prove
0 ≤ 3a[2] _−_ 2ab − 2ac − 2ad + 3b[2] _−_ 2bc − 2bd + 3c[2] _−_ 2cd + 3d[2]
by <function check_AM_GM_Mul2>, it remains to prove
0 ≤ 2a[2] _−_ 2ab − 2ad + 2b[2] _−_ 2bc + 2c[2] _−_ 2cd + 2d[2]
by <function check_AM_GM_Mul2>, this is true!
**3. Solution to IMO-1995-P2**
By <function try_homo>, it is equivalent to prove
2 2 2
_a[2]b[2]_ _b[2]c[2]_ _a[2]c[2]_ 3 b 3 c 3
_c(a + b) [+]_ _a(b + c) [+]_ _b(a + c)_ 2
_[≥]_ [3][a]
We use Hölder’s inequality:
_ab + bc + ca_ _a[2]b[2]_ _b[2]c[2]_ _a[2]c[2]_
(c(a + b) + a(b + c) + b(c + a))(
2 _≤_ _c(a + b) [+]_ _a(b + c) [+]_ _b(a + c)_ [)][.]
It remains to prove
2 2 2
3a 3 b 3 c 3
2 _≤_ _[ab][ +][ bc]2_ [ +][ ca]
by <function check_AM_GM>, this is true!
**4. Solution to USAMO-1997-P5.**
To prove
1 1 1 1
_abc + b[3]_ + c[3][ +] _a[3]_ + abc + c[3][ +] _a[3]_ + abc + b[3][ ≤] _abc_
by <function check_SimpMuirhead>, it remains to prove
1 1 1 1
_abc + b[2]c + bc[2][ +]_ _a[2]c + abc + ac[2][ +]_ _a[2]b + ab[2]_ + abc _abc_
_[≤]_
by <function try_together_l>, this is true!
-----
**5. Solution to 2001-IMO-P2.**
To prove
_a_ _b_ _c_
_√a[2]_ + 8bc [+] _√8ac + b[2][ +]_ _√8ab + c[2][ ≥]_ [1][,]
we use Hölder’s inequality:
_a_ _b_ _c_ (a + b +2 _c)[3]_ _≤_
_√a[2]_ + 8bc [+] _√8ac + b[2][ +]_ _√8ab + c[2]_ _a(a[2]_ + 8bc) + b(8ac + b[2]) + c(8ab + c[2]) _._
It remains to prove
3
(a + b + c) 2
1 ≤ _√a[3]_ + 24abc + b[3] + c[3][ .]
by <function all_cyc_mul_expr>, it remains to prove
3
_a[3]_ + 24abc + b[3] + c[3] _≤_ (a + b + c) 2
by <function no_pow>, it remains to provep
_a[3]_ + 24abc + b[3] + c[3] _≤_ (a + b + c)[3]
by <function zero_side>, it remains to prove
0 ≤−a[3] _−_ 24abc − _b[3]_ _−_ _c[3]_ + (a + b + c)[3]
by <function try_expand_r>, it remains to prove
0 ≤ 3a[2]b + 3a[2]c + 3ab[2] _−_ 18abc + 3ac[2] + 3b[2]c + 3bc[2]
by <function check_AM_GM>, it remains to prove
0 ≤ 3a[2]b − 9abc + 3ac[2] + 3b[2]c
by <function check_AM_GM>, this is true!
**6. Solution to USAMO-2003-P5.**
To prove
(a + b + 2c)[2]
2c[2] + (a + b)[2][ + (]2b[a][2][ + 2]+ ([b]a[ +] +[ c] c[)])[2][2][ + (2]2a[2][a][ +]+ ([ b]b[ +] +[ c] c[)])[2][2][ ≤] [8]
we have
(x + 1)[2]
_f_ (x) = for 0 < x < 1
(1 − _x)[2]_ + 2x[2][ ≤] [12][x]3[ + 4]
0 for 0 < x < 1,
_⇐⇒−_ [(3]3[x] (3[ −]x[1)][2] [2][ ·][ (4]2x[x] + 1)[ + 1)] _≤_
_·_ _−_
which is true.
_c_
Substitute x for
_a + b + c_ [, we have]
(a + b + 2c)[2] 4c
2c[2] + (a + b)[2][ ≤] _a + b + c_ [+ 4]3
It remains to prove
4a 4b 4c
_a + b + c_ [+] _a + b + c_ [+] _a + b + c_ [+ 4][ ≤] [8]
by <function try_together_l>, this is true!
-----
**7. Solution to Polish-2004 Problem**
We use Hölder’s inequality:
_a_ _b_ _c_ _d_
(a + b + c + d)[4] ( 1 + 1 + 1 + 1 )[3]
_≤_ (a[3] + 63bcd) 3 (63acd + b[3]) 3 (63abd + c[3]) 3 (63abc + d[3]) 3 _×_
(a(a[3] + 63bcd) + b(b[3] + 63acd) + c(c[3] + 63abd) + d(d[3] + 63abc)).
It remains to prove
4
(a + b + c + d) 3
1 1 _,_
_≤_ (a[4] + 252abcd + b[4] + c[4] + d[4]) 3
by <function no_pow>, it remains to prove
(a + b + c + d)[4]
1
_≤_ _a[4]_ + 252abcd + b[4] + c[4] + d[4][ ,]
by <function nodiv_expr>, it remains to prove
_a[4]_ + 252abcd + b[4] + c[4] + d[4] _≤_ (a + b + c + d)[4],
by <function zero_side>, it remains to prove
0 ≤−a[4] _−_ 252abcd − _b[4]_ _−_ _c[4]_ _−_ _d[4]_ + (a + b + c + d)[4]
by <function try_expand_r>, it remains to prove
0 ≤ 4a[3]b + 4a[3]c + 4a[3]d + 6a[2]b[2] + 12a[2]bc + 12a[2]bd + 6a[2]c[2] + 12a[2]cd + 6a[2]d[2] + 4ab[3] _. . ._
by <function check_AM_GM>, it remains to prove
0 ≤ 4a[3]b + 4a[3]c + 4a[3]d + 6a[2]b[2] + 12a[2]bc + 12a[2]bd + 12a[2]cd + 6a[2]d[2] + 4ab[3] _. . ._
by <function sep_neg>, it remains to prove
216abcd ≤ 4a[3]b + 4a[3]c + 4a[3]d + 6a[2]b[2] + 12a[2]bc + 12a[2]bd + 12a[2]cd + 6a[2]d[2] _. . ._
by <function check_AM_GM>, it remains to prove
216abcd ≤ 4a[3]b + 4a[3]c + 4a[3]d + 12a[2]bc + 12a[2]bd + 12a[2]cd + 4ab[3] + . . .
by <function check_AM_GM>, it remains to prove
216abcd ≤ 4a[3]b + 4a[3]c + 12a[2]bc + 12a[2]bd + 12a[2]cd + 12ab[2]c + 12ab[2]d + 12abc[2] + . . .
by <function check_AM_GM>, it remains to prove
216abcd ≤ 4a[3]b + 4a[3]c + 12a[2]bc + 12a[2]cd + 12ab[2]d + 12abc[2] + 88abcd + 12abd[2] + . . .
by <function check_AM_GM>, it remains to prove
216abcd ≤ 4a[3]b + 4a[3]c + 12a[2]bc + 136abcd + 12abd[2] + 4ac[3] + 12ac[2]d
+4ad[3] + 4b[3]c + 4b[3]d + 12b[2]cd + 4bd[3] + 4c[3]d
by <function check_AM_GM>, it remains to prove
216abcd ≤ 4a[3]b + 4a[3]c + 184abcd + 4ac[3] + 4ad[3] + 4b[3]c + 4b[3]d + 4bd[3] + 4c[3]d
by <function zero_side>, it remains to prove
0 ≤ 4a[3]b + 4a[3]c − 32abcd + 4ac[3] + 4ad[3] + 4b[3]c + 4b[3]d + 4bd[3] + 4c[3]d
by <function check_AM_GM>, it remains to prove
0 ≤ 4a[3]b − 16abcd + 4ad[3] + 4b[3]c + 4c[3]d
by <function check_AM_GM>, this is true!
-----
**8. Solution to USA-IMO-Team-Selection-2010-P2.**
By <function try_homo>, it is equivalent to prove
2 2 2
_a[3]b[3]_ _a[3]c[3]_ _b[3]c[3]_ 3 b 3 c 3
_c[2]_ (a + 2b)[2][ +] _b[2]_ (2a + c)[2][ +] _a[2]_ (b + 2c)[2][ ≥] _[a]_ 3
we use Hölder’s inequality:
(ab + ac + bc)[3] _≤_ (a(b + 2c) + b(2a + c) + c(a + 2b))[2]×
(a[3]b[3]/(c[2](a + 2b)[2]) + a[3]c[3]/(b[2](2a + c)[2]) + b[3]c[3]/(a[2](b + 2c)[2])).
It remains to prove
2 2 2
_a_ 3 b 3 c 3
3 _≤_ _[ab]9 [+][ ac]9 [+][ bc]9_
by <function check_AM_GM>, this is true!
**9. Solution to Korea-2011-P4.**
To prove
1 1 1
_a[2]_ 4a + 9 [+] _b[2]_ 4b + 9 [+] _c[2]_ 4c + 9 18 _[,]_
_−_ _−_ _−_ _[≤]_ [7]
we have
_f_ (x) = 1/(x[2] 4x + 9) for 0 < x < 1
_−_ _≤_ [2 +]18[ x]
_x (x −_ 1)[2]
_⇐⇒_ _−_ 18 (x[2] 4x + 9)
_−_ _[≤]_ [0][ for][ 0][ < x <][ 1][,]
which is true. Substitute x for a/(a + b + c), we have
(a + b + c)[2] 3a + 2b + 2c
1/(a[2] 4a + 9) =
_−_ _a[2]_ 4a (a + b + c) + 9 (a + b + c)[2][ ≤] 18a + 18b + 18c _[.]_
_−_
It remains to prove
3a + 2b + 2c 2a + 3b + 2c 2a + 2b + 3c
18a + 18b + 18c [+] 18a + 18b + 18c [+] 18a + 18b + 18c _[≤]_ 18[7] _[,]_
by <function try_together_l>, this is true.
-----
**10. Solution to Japan-2014-P5**
By <function try_homo>, it is equivalent to prove
_a (a + b + c)_ _b (a + b + c)_
9bc + 4 (b _c)[2]_ + (a + b + c)[2][ +] 9ac + 4 ( _a + c)[2]_ + (a + b + c)[2][ +]
_−_ _−_
_c (a + b + c)_
9ab + 4 (a _b)[2]_ + (a + b + c)[2][ ≥] 2[1] _[.]_
_−_
We use Hölder’s inequality:
(a + b + c)[3] _≤_
_a (a + b + c)_ _b (a + b + c)_ _c (a + b + c)_
(
9bc + 4 (b _c)[2]_ + (a + b + c)[2][ +] 9ac + 4 ( _a + c)[2]_ + (a + b + c)[2][ +] 9ab + 4 (a _b)[2]_ + (a + b + c)[2][ )][×]
_−_ _−_ _−_
_a_ 9a[2] + 4 (b − _c)[2]_ + (a + b + c)[2][] + b 9b[2] + 4 (−a + c)[2] + (a + b + c)[2][] +
n
_c_ 9c[2] + 4 (a _b)[2]_ + (a + b + c)[2][o]
_−_
It remains to prove
1 (a + b + c)[3]
2 27abc + 4a (b _c)[2]_ + a (a + b + c)[2] + 4b (a _c)[2]_ + b (a + b + c)[2] + 4c (a _b)[2]_ + c (a + b + c)[2]
_[≤]_ _−_ _−_ _−_
by <function nodiv_expr>, it remains to prove
27abc + 4a (b − _c)[2]_ + a (a + b + c)[2] + 4b (a − _c)[2]_ + b (a + b + c)[2] + 4c (a − _b)[2]_ + c (a + b + c)[2]
_≤_ 2 (a + b + c)[3]
by <function zero_side>, it remains to prove
0 ≤−27abc − 4a (b − _c)[2]_ _−_ _a (a + b + c)[2]_ _−_ 4b (a − _c)[2]_ _−_ _b (a + b + c)[2]_ _−_ 4c (a − _b)[2]_
_−c (a + b + c)[2]_ + 2 (a + b + c)[3]
by <function try_expand_r>, it remains to prove
0 ≤ _a[3]_ _−_ _a[2]b −_ _a[2]c −_ _ab[2]_ + 3abc − _ac[2]_ + b[3] _−_ _b[2]c −_ _bc[2]_ + c[3]
by <function check_schur>, this is true!
-----
**C** **Human Evaluation of Generated Synthetic Theorems**
We select 10 synthetic problems generated by our AIPS for evaluation, and 4 IMO problems for
comparison. Then, we invite three professional contestants to evaluate the difficulty and elegance
of these 14 problems. Two of the evaluators are National Mathematical Olympiad gold medalists,
and one is a silver medalist. The difficulty and elegance are needed to assign a score from 1 to 7,
respectively.
**C.1** **10 Synthetic Theorems and 4 Comparison IMO Problems**
**C.1.1** **10 Synthetic Theorems**
- (Problem1)
Given a, b, c > 0, then
(a + b + c)[3] 4a 4b 4c
(ab + bc + ca)[2][ ≤] (b + c)[2][ +] (c + a)[2][ +] (a + b)[2]
- (Problem2)
Given a, b, c > 0, then
27(a[2] + b[2])[2](b[2] + c[2])[2](c[2] + a[2])[2]
(a[4] + b[4] + c[4] + 3a[2]b[2] + 3b[2]c[2] + 3c[2]a[2])[3][ ≤] [1]
- (Problem3)
Given a, b, c > 0, then
_abc(a + b + c)[3]_
3(ab + bc + ca)(a[3]c + ab[3] + bc[3])
_[≤]_ [1]
- (Problem4)
Given a, b, c > 0, then
2a 2b 2c
_√2a[2]_ + b[2] + c[2][ +] _√2b[2]_ + c[2] + a[2][ +] _√2c[2]_ + a[2] + b[2][ ≤]
3√2(a + b + c)
5a[2] + 5b[2] + 5c[2] + ab + bc + ca
- (Problem5)
Given a, b, c > 0, then
_√6(a + b + c)[2]_
6√a[4] + b[4] + c[4] + a[2]b[2] + b[2]c[2] + c[2]a[2]
_c_
_√2c[2]_ + a[2] + b[2]
2a[2] + b[2] + c[2]
2b[2] + c[2] + a[2]
- (Problem6)
Given a, b, c > 0, then
3
2(a + b + c) 2 ≤ (√a + b + _√b + c +_ _[√]c + a)√a[2]_ + b[2] + c[2] + ab + bc + ca
- (Problem7)
Given a, b, c > 0, then
3
(a[4] + b[4] + c[4]) 2 _a[5]_ _b[5]_ _c[5]_
_√ab[2]_ + bc[2] + ca[2] _−_ _abc√a + b + c_ _≤_ _√ca + b[2][ +]_ _√ab + c[2][ +]_ _√bc + a[2]_
- (Problem8)
Given a, b, c > 0, then
54abc + (a + b + c)[3]
_√a[2]_ + 2bc + _√2ab + c[2]_ + _√2ac + b[2][][2][ ≤]_ _[a][ +][ b][ +][ c]_
- (Problem9)
Given a, b, c > 0, then
_a[2]b_ _ac[2]_ _b[2]c_
(a + b)[3][ +] (a + c)[3][ +] (b + c)[3][ ≤] 8[3]
- (Problem10)
Given a, b, c > 0, then
-----
(ab + ac + bc)[2] _a[2]b_ _b[2]c_
_a[2]_ + b[2] + c[2][√]a[2] + b[2] + c[2] + 3ab + 3bc + 3ca _≤_ _√b[2]_ + 3ac + _√c[2]_ + 3ab
_c[2]a_
_√a[2]_ + 3bc
**C.1.2** **4 IMO Problems**
- (1995-imo-2)
Given a, b, c > 0 and abc = 1, then
- (2001-imo-2)
1
_a[3]_ (b + c) 2
_[≥]_ [3]
_c[3]_ (a + b) [+]
_b[3]_ (a + c) [+]
Given a, b, c > 0, then
Given a, b, c > 0, then _√a[2]_ + 8bc + _√8ac + b[2][ +]_ _√8ab + c[2][ ≥]_ [1]
- (2006-imo-3)
Assume9 _a, b, c are three real numbers, then |ab(a[2]_ _−_ _b[2]) + bc(b[2]_ _−_ _c[2]) + ca(c[2]_ _−_ _a[2])| ≤_
16√2 [(][a][2][ +][ b][2][ +][ c][2][)][2]
8ac + b[2][ +]
_a[2]_ + 8bc
- (2020-imo-2)
Assume a _≥_ _b_ _≥_ _c_ _≥_ _d_ _≥_ 0 and a + b + c + d = 1, prove that
_a[a]b[b]c[c]d[d]_ (a + 2b + 3c + 4d) < 1
**C.2** **Human Evaluation Results**
The rating scores by the three professional contestants are reported in Table 2. The third expert does
not assign scores to the four IMO problems, believing the average difficulty of the ten problems is
significantly lower than that of IMO problems. The first expert does not give a difficulty score for
Problem 8 because he does not solve it. From the table, we observe that while the average difficulty
does not compare with IMO inequalities, a few problems, such as Problem 9 and Problem 7, reach
the IMO level.
Table 2: Scores given by human experts on synthetic theorems and IMO problems. Scores range
|m 1 to 7. GM de|enotes gold medalist, and|Col3|SM denotes silver medali|Col5|ist.|Col7|
|---|---|---|---|---|---|---|
|Problem|Expert 1 (GM)||Expert 2 (GM)||Expert 3 (SM)||
||Difficulty|Elegance|Difficulty|Elegance|Difficulty|Elegance|
|1|2|2|2|3|1|2.5|
|2|1|1|1|2|1|1|
|3|2|1|4|2|1.5|1|
|4|3|2|3|2|2|1.5|
|5|2|1|2|2|1.5|1|
|6|2|2|2|2|1.5|1.5|
|7|5|1|4|2|2|2|
|8|NA|2|3|2|1|1.5|
|9|4|3|4|5|2.5|2|
|10|4|1|3|1|1|1.5|
|IMO-1995-2|2|4|3|5|NA|NA|
|IMO-2001-2|3|4|3|5|NA|NA|
|IMO-2006-3|3|3|5|3|NA|NA|
|IMO-2020-2|2|2|4|3|NA|NA|
-----
**C.3** **Synthetic Theorem Selected for Mathematical Olympiad**
Among the 10 synthetic problems above, problem 4 was chosen as a competition problem in a
major city’s 2024 Mathematical Olympiad, as shown in Fig. 14. It received positive feedback for its
appropriate difficulty, concise form, and variety of solutions. This problem was posted online, and
75 contestants provided their evaluations on its difficulty and elegance. The score distributions are
shown in Fig. 15. The average difficulty score was 3.3 out of 7, and the elegance score was 2.2 out of
5. The 4 solutions to this problem, including one provided by our AIPS and 3 solutions collected
from the competition organizers, are given as follows.
**Problem: Given three positive real numbers a, b, c, prove that**
2a 2b 2c 3√2(a + b + c)
_√2a[2]_ + b[2] + c[2][ +] _√2b[2]_ + c[2] + a[2][ +] _√2c[2]_ + a[2] + b[2][ ≤] _√5a[2]_ + 5b[2] + 5c[2] + ab + bc + ca
Figure 14: Selected theorem for a major city’s Mathematical Olympiad.
Figure 15: Score distributions evaluated by 75 contestants online.
-----
**Proof 1. (Modified from AIPS’ proof)**
6x( _a[2]_ _b[2]_ _c[2])_
_f_ _[′′](x) =_ (a[2] +− b[2] +− c[2] +− x[2]) 25 _< 0 for x satisfying 0 < x < a[2]_ + b[2] + c[2], where f (x) =
2x 2 _[a][+]3[b][+][c]_
_·_
_√x[2]_ + a[2] + b[2] + c[2][ . By Jensen’s inequality,][ LHS][ ≤] [3][ ·] _a+b+c_ 2 [. It]
_a[2]_ + b[2] + c[2] + 3
suffices to prove q
2 _[a][+]3[b][+][c]_ 3√2(a + b + c)
3 _·_
_·_ _a+b+c_ 2 _√5a[2]_ + 5b[2] + 5c[2] + ab + bc + ca _[.]_
_a[2]_ + b[2] + c[2] + 3 _[≤]_
Expanding the left-hand side, this is true.q
**Proof 2. (Given by Humans)**
Without loss of generality, assume a ≥ _b ≥_ _c and a[2]_ + b[2] + c[2] = 1. Then the inequality in
question is equivalent to
_a_ 3(a + b + c)
_√1 + a[2][ ≤]_ 9 + (a + b + c)[2]
X
Notice that p
_a_ 1 1 _b_
1 1
_√1 + a[2][ =]_ _−_ 1 + a[2][ ≥] _−_ 1 + b[2][ =] _√1 + b[2]_
r r
By Chebyshev inequality, we get
_a_
( 1 + a[2])( _√1 + a[2][ )][ ≤]_ [3(][a][ +][ b][ +][ c][)][.]
X p X
Then it suffices to prove
1 + a[2] _≥_ 9 + (a + b + c)[2]
X p p
which is equivalent to show 6 + 2 _ab ≤_ 2 1 + a[2][√]1 + b[2]. Notice that
1 + ab 1 + a[2][p]1 + b[2] 2ab _a[2]_ + b[2]
_≤_ [P] [P][ √]⇐⇒ _≤_
and the right-hand-side holds by AM-GM inequality. Therefore we have finished the proof.p
-----
**Proof 3. (Given by Humans)**
First we divide the proof into two subgoals:
3√2(a + b + c) 2(a + b + c)
_√5a[2]_ + 5b[2] + 5c[2] + ab + bc + ca 4 (1)
_[≥]_
3 [(][a][2][ +][ b][2][ +][ c][2][)]
r
and
2a 2a
(2)
4 _≥_ _√2a[2]_ + b[2] + c[2]
3 [(][a][2][ +][ b][2][ +][ c][2][)]
X X
q
Where denotes cyclic summation. The proof of (1) follows from the fact that a[2]+b[2]+c[2] _≥_
_ab + bc + ca. For the second part, we apply Chebyshev’s inequality._
Without loss of generality, we assume a _b_ _c. First notice that_
[P] _≥_ _≥_
2a 2a
_xa(2a[2]_ _b[2]_ _c[2])_ (3)
4 _−_ _√2a[2]_ + b[2] + c[2][ = 1]3 _−_ _−_
3 [(][a][2][ +][ b][2][ +][ c][2][)]
X X
where q
2a
_xa =_
4 4
3 [(][a][2][ +][ b][2][ +][ c][2][)]√2a[2] + b[2] + c[2]( 3 [(][a][2][ +][ b][2][ +][ c][2][) +] 2a[2] + b[2] + c[2])
andshow two inequalities: xb, xcq are defined similarly. We claim thatq xa ≥ _xb ≥_ _xc. For xpa ≥_ _xb, it suffice to_
_a_ _a[2]_ + 2b[2] + c[2] _≥_ _b_ 2a[2] + b[2] + c[2]
_ap(a[2]_ + 2b[2] + c[2]) ≥ _bp(2a[2]_ + b[2] + c[2])
Both can be proven by factorization, and the proof of xb ≥ _xc is similar._
Since a ≥ _b ≥_ _c, we get 2a[2]_ _−_ _b[2]_ _−_ _c[2]_ _≥_ 2b[2] _−_ _c[2]_ _−_ _a[2]_ _≥_ 2c[2] _−_ _a[2]_ _−_ _b[2]. Combining_
with xa ≥ _xb ≥_ _xc and applying Chebyshev’s inequality, we get_ _xa(2a[2]_ _−_ _b[2]_ _−_ _c[2]) ≥_ 0.
Finally, combining with (3), we conclude that (2) is proved.
[P]
**Proof 4. (Given by Humans)**
Let S = a[2] + b[2] + c[2] and t = _[a][ +][ b][ +][ c]_ . Substituting into the inequality and rearranging:
3
2a
LHS = _√S + a[2]_
X
RHS = ((2(t[2] + S)[−] [1]2 − 2t[2](t[2] + S)[−] 2[3] )(a − _t) + 2t(t[2]_ + S)[−] 2[1] )
It suffice to show X
2a
3
_√a[2]_ + S (t[2] + S) 2
_[≤]_ [2][S][(][a][ −] _[t][) + 2][t][(][t][2][ +][ S][)]_
which is equivalent to
3a[2]t[4]S + 3a[2]t[2]S[2] _≤_ (2Sa[3]t[3] + St[6]) + (2S[2]ta[3] + S[2]a[4])
The last inequality is proved by applying AM-GM inequality.
-----
| [
"Chenrui, Wei",
"Mengzhou, Sun",
"Wei, Wang"
] | 2024-06-20T00:00:00 | NeurIPS 2024 | true | 1 | 0 | null | http://arxiv.org/abs/2406.14219 | https://arxiv.org/abs/2406.14219 | https://www.semanticscholar.org/paper/c430d9b478ae93c5bb2d197c607fcacb9c8b0e22 |
Proving Theorems Recursively | Recent advances in automated theorem proving leverages language models to explore expanded search spaces by step-by-step proof generation. However, such approaches are usually based on short-sighted heuristics (e.g., log probability or value function scores) that potentially lead to suboptimal or even distracting subgoals, preventing us from finding longer proofs. To address this challenge, we propose POETRY (PrOvE Theorems RecursivelY), which proves theorems in a recursive, level-by-level manner in the Isabelle theorem prover. Unlike previous step-by-step methods, POETRY searches for a verifiable sketch of the proof at each level and focuses on solving the current level's theorem or conjecture. Detailed proofs of intermediate conjectures within the sketch are temporarily replaced by a placeholder tactic called sorry, deferring their proofs to subsequent levels. This approach allows the theorem to be tackled incrementally by outlining the overall theorem at the first level and then solving the intermediate conjectures at deeper levels. Experiments are conducted on the miniF2F and PISA datasets and significant performance gains are observed in our POETRY approach over state-of-the-art methods. POETRY on miniF2F achieves an average proving success rate improvement of 5.1%. Moreover, we observe a substantial increase in the maximum proof length found by POETRY, from 10 to 26. | POETRY (PrOvE Theorems RecursivelY), which proves theorems in a recursive, level-by-level manner in the Isabelle theorem prover, is proposed, which allows the theorem to be tackled incrementally by outlining the overall theorem at the first level and then solving the intermediate conjectures at deeper levels. | #### Proving Theorems Recursively
**Haiming Wang[1][∗]** **Huajian Xin[1][∗]** **Zhengying Liu[2][†]** **Wenda Li[3]**
**Yinya Huang[4]** **Jianqiao Lu[5]** **Zhicheng Yang[6]** **Jing Tang[6,7]** **Jian Yin[1][†]**
**Zhenguo Li[2]** **Xiaodan Liang[1,8][†]**
1Sun Yat-sen University 2Huawei Noah’s Ark Lab 3University of Edinburgh
4City University of Hong Kong 5HKU 6HKUST (Guangzhou) 7HKUST 8MBZUAI
{wanghm39, xinhj, issjyin}@mail2.sysu.edu.cn, [email protected], [email protected]
{liuzhengying2, Li.Zhenguo}@huawei.com, [email protected]
[email protected] {yangzhch6, xdliang328}@gmail.com
**Abstract**
Recent advances in automated theorem proving leverages language models to
explore expanded search spaces by step-by-step proof generation. However, such
approaches are usually based on short-sighted heuristics (e.g., log probability
or value function scores) that potentially lead to suboptimal or even distracting
subgoals, preventing us from finding longer proofs. To address this challenge, we
propose POETRY (PrOvE Theorems RecursivelY), which proves theorems in a
recursive, level-by-level manner in the Isabelle theorem prover. Unlike previous
step-by-step methods, POETRY searches for a verifiable sketch of the proof at each
level and focuses on solving the current level’s theorem or conjecture. Detailed
proofs of intermediate conjectures within the sketch are temporarily replaced
by a placeholder tactic called sorry, deferring their proofs to subsequent levels.
This approach allows the theorem to be tackled incrementally by outlining the
overall theorem at the first level and then solving the intermediate conjectures at
deeper levels. Experiments are conducted on the miniF2F and PISA datasets and
significant performance gains are observed in our POETRY approach over stateof-the-art methods. POETRY on miniF2F achieves an average proving success
rate improvement of 5.1%. Moreover, we observe a substantial increase in the
maximum proof length found by POETRY, from 10 to 26[2].
**1** **Introduction**
Neural theorem proving has made significant strides in recent years [Polu and Sutskever, 2020, Han
et al., 2022, Polu et al., 2022, Wang et al., 2023c, Jiang et al., 2022a, 2021, 2022b, Wang et al.,
2023b, Huang et al., 2024, Thakur et al., 2024, Liu et al., 2023, Xiong et al., 2023], particularly
with the integration of language models and search algorithms [Polu and Sutskever, 2020, Han et al.,
2022, Jiang et al., 2022a, Yang et al., 2023, Lample et al., 2022]. The combination of language
models, which excel at understanding and generating human-like text, and search algorithms, which
systematically explore potential solutions, has proven to be a powerful approach to discovering proofs
for intricate theorems.
As shown in Figure 1(a), search-based neural theorem proving methods begin with a theorem
statement to prove. A formal mathematic environment like Isabelle will first process the theorem
statement and provide the initial proof state. Starting with the initial proof state, the proving
process alternates between sampling new proof steps from the language model and obtaining new
_∗_ These authors contributed equally. Corresponding authors
_†_
[2https://github.com/wiio12/POETRY](https://github.com/wiio12/POETRY)
Preprint. Under review.
-----
|Col1|Proof Level 1 Proof Level 2 Proof Level 3|
|---|---|
|theorem(in group) int_pow_pow: assumes "x ∈ carrier G" shows "(x [^] (n :: int)) [^] (m :: int) = x [^] (n * m :: int)"|Proof Level 1 Proof Level 2 Proof Level 3 Proof Target assume n_ge: "n ≥ 0" thus ?thesis assume m_ge: "m ≥ 0" theorem(in group) int_pow_pow: proof (cases) thus ?thesis assumes "x ∈ carrier G" assume m_ge: "m ≥ 0" thus using n_ge nat_pow_pow shows "(x [^] (n :: int)) [^] (m ?thesis sorry int_pow_def2 next assume m_lt: "¬ m ≥ 0" :: int) = x [^] (n * m :: int)" assume m_lt: "¬ m ≥ 0" with n_ge show ?thesis >>> goal (1 subgoal): 1. (x [^] n) [^] m ... proof (cases) with n_ge show ?thesis sorry apply (simp add: in- >>> goal (2 subgoals): 1. ?P ... 2. ¬ ?P... qed t_pow_def2) assume n_ge: "n ≥ 0" Middle Conj. by (metis assms mult_mi- thus ?thesis sorry assume n_lt: "¬ n ≥ 0" nus_right n_ge >>> Successful solve goal: (0 ≤ n) ... next thus ?thesis nat_pow_pow) proof (cases) >>> goal (1 subgoal): 1. ¬ 0 ≤ n ⟹ (x [... assume n_lt: "¬ n ≥ 0" Middle Conj. assume m_ge: "m ≥ 0" have "inv x [^] (nat m * thus ?thesis sorry have "inv x [^] (nat m * nat nat (- n)) = inv x [^] nat >>> Successful solve goal: (¬ 0 ≤ n)... (- n)) = inv x [^] nat (- (m * (- (m * n))" qed >>> No subgoals! Proof Sketch n))" sorry by (metis (full_types) Proof Sketch Middle Conj. show ?thesis sorry m_ge mult_minus_right) Proof Target (theorem/middle conjecture) ... >>>... Proof State proof... Proof Step ...|
|>>> goal (1 subgoal): 1. (x [^] n) [^] m... proof (cases) >>> goal (2 subgoals): 1. ?P ⟹ (x... 2... assume n_ge: "n ≥ 0" thus ?thesis >>> using this: 0 ≤ n goal (1 subgoal):... proof (cases) >>> goal (2 subgoals): 1. 0 ≤ n ⟹ ... assume m_ge: "m ≥ 0" thus ?thesis >>> using this: 0 ≤ m goal (1 subgoal)... using n_ge nat_pow_pow in t_pow_def2 >>> Successful solve goal (m ≥ 0) ... ... qed >>> No subgoals! Complete proof||
**(a) Step-by-step Proof**
**(b) Recursive Proof**
Figure 1: Comparison between the step-by-step proof and the recursive proof. (a) A step-by-step proving
approach ignores the hierarchical structure inherent in the proof, treating it merely as a sequence of proof steps.
The proof cannot be verified as valid until it is fully complete. (b) The recursive proving method decomposes the
structured proof into different levels of verifiable proof sketches. Each proof sketch attempts to prove the target
theorem or conjecture by outlining the primary steps at the current level and postponing the proof of intermediate
conjectures to the next level.
states by executing the generated proof steps within the formal mathematic enviroment. Additionally,
a search algorithm, such as best-first search or Monte Carlo Tree Search (MCTS), is employed to
find a complete path of proof steps. The search algorithm selects the next state to explore based on
heuristics such as the log-probability of the proof step [Polu and Sutskever, 2020, Jiang et al., 2022a,
Yang et al., 2023], value function scores of the proof state [Han et al., 2022, Polu et al., 2022] (in
best-first search), or a PUCT score that combines both [Wang et al., 2023c, Lample et al., 2022] (in
MCTS). These heuristics assess the plausibility or potential value of a given step, helping to prioritize
the most promising actions. However, these scores are approximate, do not ensure the correctness
of the proof direction, and can lead to exploring sub-optimal or distracting subgoals. Even if the
language model is capable enough to produce correct proof steps, the search algorithm, guided by
short-sighted heuristics, often gets trapped exploring a detailed proof of a meaningless intermediate
conjecture. This wastes time and may even cause the algorithm to fail in finding the correct proof
path due to a search timeout. Moreover, as the length of the proof increases in more challenging
problems, the search space expands exponentially. Consequently, the need for an accurate heuristic to
guide the search becomes critical, as a ‘myopic’ step-by-step approach can easily get lost in the vast
expanse of the intermediate proving steps.
To address the aforementioned drawbacks, we propose POETRY, a novel approach that proves the
theorem recursively, level by level. As shown in Figure 1(b), POETRY first searches for a proof sketch,
which is defined to be a verifiable proof outline with the detailed proof of the middle conjecture
replaced by a placeholder tactic, sorry. The sorry tactic signals the formal environment to temporarily
ignore the proof of the middle conjecture, assuming it as resolved. Once a validated proof sketch is
established, POETRY then endeavors to prove the intermediate conjectures that remain unresolved,
also in a recursive, level-by-level manner. This procedure persists until every sorry keyword is
substituted with a valid proof. Notably, the verified sketch at each level may still contain errors.
Since POETRY uses the sorry tactic to skip the proof of intermediate conjectures, these conjectures
might represent false statements and be unprovable. However, they still serve as correct conjectures
to prove the target theorem or conjecture at the current level, resulting in an incorrect proof sketch.
For example, to prove the theorem of the commutative property of addition, a + b = b + a, a false
conjecture such as a = b might be used. However, when actually attempting to prove a = b, we
would never be able to find a valid proof at the next level. If a false proof sketch is generated and
POETRY fails to find the proof for the middle conjecture, it will continue its search to identify a new
proof sketch. Nevertheless, Empirical evidence indicates that verifying and ensuring the correctness
of the proof sketch at each level before delving into deeper proofs significantly enhances performance.
Additionally, we observe a substantial increase in the length of the proofs being able to be generated
by POETRY compared with step-by-step approaches. This recursive methodology is inspired by
human problem-solving techniques, where complex problems are decomposed into manageable
sub-problems, each addressed using a similar recursive strategy. By adopting this approach, POETRY
-----
not only improves the efficiency of its search process but also increases the overall success rate in
discovering valid proofs.
We conduct extensive experiments on the theorem proving datasets miniF2F [Zheng et al., 2021]
and PISA [Jiang et al., 2021] to validate the effectiveness of our proposed approach. POETRY
significantly outperforms previous approaches, achieving a pass rate of 42.2% on both the miniF2F
valid and test datasets, respectively. With a 5.1% absolute improvement on average over the previous
state-of-the-art. Additionally, our ablation study shows that with recursive theorem proving, we obtain
a 3.9% absolute improvement on average compared with step-by-step baselines. Our case study also
reveals that POETRY can find proofs substantially longer compared with sequential step-by-step
proving methods, the maximum proof length increases from 10 to 26 compared to the step-by-step
baseline in the PISA dataset.
**2** **Preliminary**
**2.1** **Formal Mathematic Enviroments**
We choose Isabelle [Paulson, 1994] as our formal environment. It is widely used for formal verification
purposes in academia and industry [Gesellensetter et al., 2008, Klein et al., 2009, Zhang et al., 2024].
It employs a structured proof language called Isar [Wenzel et al., 2004], which facilitates the creation
of human-readable proofs and bridges the gap between formal verification and human understanding.
As illustrated in Figure 1(a), Isabelle processes each proof step (or tactic) and provides feedback.
If the proof step fails to apply, an error message is returned. Otherwise, Isabelle returns a proof
```
state along with a special variable, proof level, indicating the current level after applying the
```
step. In the Isabelle theorem prover, the proof level indicates the depth within a structured proof.
This level increases with commands like have, obtain, and show, which introduce new subgoals
or conjectures in the proof. Conversely, it decreases with commands like by, qed and done, which
conclude a proof block or subgoal.
Isabelle is well-suited for POETRY to accomplish recursive theorem proving. The Isar language
is elegantly structured in a level-by-level format, and it contains proof level that can be easily
used to identify each level. However, the recursive proving technique proposed by POETRY is not
specific to Isabelle; the same framework can be extended to other formal mathematical environments
like Lean [de Moura et al., 2015], Coq [Barras et al., 1997], and HOL [Harrison, 2009], with
additional engineering effort to accommodate the proving strategies. These environments also
provide mechanisms to temporarily skip parts of proofs, similar to Isabelle’s sorry tactic, such
as Lean’s sorry, and Coq’s Admitted. We will leave the extension of POETRY to other formal
environments for future work.
**2.2** **Search-Based Neural Theorem Proving**
Search-based neural theorem proving mostly employs the approach introduced by GPT-f [Polu and
Sutskever, 2020]. In this method, a pre-trained causal language model predicts the next proof step
based on the current proof state and optional context. The language model is trained using data
formatted as follows:
```
INPUT: CONTEXT $(context) GOAL $(proof state) STEP
```
(1)
```
OUTPUT: $(proof step)
```
where $(·) represents the substitution operation, and context denotes the preceding proof steps
leading to the current proof state. At test time, GPT-f employs a best-first search strategy to identify a
sequence of proof steps that solve the problem. Specifically, The proof search algorithm constructs a
tree-like search structure, where each node represents a proof state and each edge represents a proof
step. Starting from the root node, the proof search continuously selects the unexplored node with the
highest score and performs an expansion. The score for each node is the cumulative log probability
of the proof steps that led to the node. During expansion, the language model receives the node’s
proof state and preceding context, then samples e new proof steps. Isabelle subsequently processes
these proof steps, generating new proof states or error messages. The search continues until a proof
is found or the computational budget is exhausted.
-----
**3** **Methodology**
**Algorithm 1 Core data curation process**
1: function EXTRACTPROOFSKETCH(proofLines, index )
2: _▷_ _proofLines: a list of pairs of the format “(proofStep, proofLevel)”_
3: _▷_ _index_ : the starting index in proofLines for processing
4: _currentSketch, allSketches ←_ empty list, empty list
5: _, currentLevel ← _proofLines[index_ ] _▷_ Obtain the current proof level being extracted
6: _proofLevel ←_ _currentLevel_
7: **while proofLevel ≥** _currentLevel do ▷_ Extraction ends after the proof level drops below the current
proof level
8: _proofStep, proofLevel ←_ _proofLines[index_ ]
9: _, nextLevel ← _proofLines[index + 1]_
10: **if nextLevel = currentLevel then**
11: _currentSketch.append(proofStep)_
12: _index ←_ _index + 1_
13: **else if nextLevel > currentLevel then**
14: _currentSketch.append(proofStep + “ sorry")_ _▷_ Replace the next level proof with sorry
15: _deeperSketches, index ←_ EXTRACTPROOFSKETCH(proofLines, index + 1)
16: _allSketches.extend(deeperSketches)_
17: **end if**
18: **end while**
19: _allSketches.append(currentSketch)_
20: **return allSketches, index**
21: end function
**3.1** **Recursive Data Construction**
**Proof sketch extraction. As illustrated in Figure 1(b), to prepare recursive proving data, we need to**
split theorems into blocks of proof sketches. Each proof sketch focuses solely on the target theorem,
conjectures, or subgoals, with the detailed proof of intermediate conjectures or subgoals replaced
by the sorry tactic. Algorithm 1 presents the pseudocode for the sketch data extraction process.
POETRY initially inputs the complete theorem text into Isabelle, which parses it into a sequence of
proof lines, containing proof steps and corresponding proof levels. Subsequently, the list of proof
lines is passed to the ExtractProofSketch function with the index set to 0, initiating the extraction of
all proof sketches. The sketch proof extraction process starts by identifying the current proof level,
which is determined by the level of the proof step at the initial index (Line 5). Proof steps that are
on the same level as the target theorem, conjectures, or subgoals are those that directly focus on
proving the target. Our goal is to retain proof steps with a proof level equal to the current proof level
(Lines 10-12) and replace higher-level proof steps with the sorry tactic (Lines 13-16). We defer the
extraction of higher-level proofs to the recursive call of ExtractProofSketch in Line 15. Finally, the
function will return a list of extracted proof sketches, each containing only the current level of proof,
as illustrated in Figure 1(b).
**Training data construction. Following the extraction of proof sketches, POETRY follows [Jiang**
et al., 2022a] and uses PISA, an interactive environment built on top of Isabelle, to extract proof
states for each proof step. Subsequently, the proof states and proof steps are reformatted into lines in
Equation 1 and used as training examples to fine-tune the language model. Notably, although sorry is
an independent tactic in Isabelle, POETRY integrates the sorry tactic into the preceding proof step
(Line 14 in Algorithm 1). This enables the language model to predict the intermediate conjectures
and the sorry tactic simultaneously. For example, a proof step with the sorry keyword would appear
as have "x + 2 = 2x" sorry. Merging the sorry tactic is crucial to ensure that the language model
generates proof steps at the current level and postpones higher-level proofs using the sorry tactic.
Without this merge, the model must determine the use of sorry solely based on the context and
proof state, which offers no guarantee that the model will generate the necessary sorry after stating a
conjecture or subgoal. This approach ensures that deep-level proofs are deferred correctly.
-----
```
lemma/theorem
HP PL-1
O
HP O
O
P O
```
```
HP HALF-PROVED
F FAILED
|(b) Deeper level 2 subgoal BFS subgoal lemma/theorem F PL-2 HP PL-1 O F F HP O F F P O|(c) Backpropagate state subgoal lemma/theorem F PL-2 O PL-1 O F F HOP O F F P O|(d) Continue search on Level 1 subgoal lemma/theorem F PL-2 HP PL-1 O F F HOP P F F P O P O|
|---|---|---|
||subgoal lemma/theorem F PL-2 O PL-1 O F F HOP O F F P O||
|subgoal lemma/theorem conjecture subgoal lemma/theorem conjecture F PL-2 HP PL-1 P PL-2 F PL-2 P PL-1 P PL-2 F F HOP P O P F F HOP P O P F F P O P O O P F F P O P O O P (e) Deeper level 2 conjecture BFS (f) Backpropagate state and successfully proved|||
```
O
P
HP
F
```
```
Figure 2: A walkthrough example of recursive BFS. Each node in the proof tree is a proof state and each
edge is a proof step. (a) The proof search begins by finding the proof sketch at the first level using BFS. The
search is paused upon identifying a successful proof path, marked with P and HP nodes. This proof path contains
a sorry edge, indicating that it includes skipped conjectures or subgoals that must be addressed in the next level.
**(b) Recursive BFS enters the next level of proof search to attempt to prove the skipped subgoal from the first**
level. Unfortunately, the proof search for this subgoal fails due to a lack of valid nodes to explore, and the search
returns to the first level. (c) After the failed attempt to prove the subgoal, the previously established proof path at
the first level becomes invalid. Consequently, we backpropagate the failure from the second level’s root node up
to the first-level root node, updating all the HP nodes to an O node. (d) At the first level, with the status set to
open for searching proofs, we continued to explore new proof paths. Fortunately, we discovered another proof
path. However, this path also contained a sorry edge with a skipped conjecture that needs to be proved at the
next level. (e) Similar to (b), the recursive BFS proceeds to the next level to search for a proof for the previously
skipped conjecture. It successfully finds a proof path without any "sorry" edges (denoted as P nodes), indicating
that the conjecture has been proven successfully without any skipped intermediate conjectures or subgoals in the
proof path. (f) After finding the sub-level proof, the recursive BFS returns to the first level and backpropagates
the PROVED message to the root, completing the proof.
**3.2** **Recursive Best-First Search**
To prove theorems recursively, POETRY introduces a novel recursive best-first search (recursive
BFS) algorithm to conduct a level-by-level proof search. Figure 2 illustrates a complete walkthrough.
In general, recursive BFS employs the best-first search technique to search for proof sketches at each
level. When a proof sketch is found at a certain level, the algorithm pauses the search at this current
level and then proceeds to the next level to solve the skipped middle conjectures by this current
level. Once all sketches are found and middle conjectures or subgoals are resolved, a complete
proof is achieved. Recursive BFS enhances the generic best-first search to handle multi-level proofs
and ensures that the search can pause and continue across different proof levels, adapting BFS to
dynamically shift between current and subsequent proof layers based on the progress and outcomes
of proof sketches. Below, we will introduce the core elements in the recursive BFS: the sorry edge
and the node status. For the complete updating rules of nodes status and proof search terminate
conditions, please refer to Appendix A.1.
**Sorry edge and node status. As shown in Figure 2(a), each node in the proof tree is a proof state,**
and each edge is a proof step. In a proof state, once a tactic contains a "sorry" keyword (usually
after a conjecture or subgoal), we use a special sorry edge to connect the parent node and the child
node. Then the sorry edge attaches the root node of the next proof level with an unproved conjecture
or subgoal. Such root nodes have a score of 0 and will not be selected in the current-level proof
search. Moreover, we attach each node in the search tree with one of the status labels: OPEN (the
node is open, and no proof has been found so far), FAILED (the node is failed when all potential
subproofs or child nodes stemming from it are unable to establish a valid proo), PROVED (the node
is proven and part of the successful proof), and HALF-PROVED. A HALF-PROVED node means it
belongs to the trajectory that has successfully found a complete proof sketch but contains special
```
sorry edges with unsolved next-level subgoal or mid-conjecture. Only after all the mid-conjectures
```
or subgoals in the sorry edges from the HALF-PROVED node to the PROVED node are proved
will the node be switched to a PROVED node, as illustrated in Figure 2(f).
Using recursive best-first search, POETRY can generate a verifiable proof sketch at each proving
level before proceeding to prove the middle conjectures or subgoals at the next level. In essence,
POETRY breaks down a lengthy proof into manageable, shorter proof sketches, preventing the search
-----
space from expanding exponentially as the proof length increases. This approach allows search-based
methods to find more challenging and longer proofs without necessitating a highly performant value
function model to guide the proof search procedure.
**4** **Experiments**
**4.1** **Experimental Setup**
This section presents our experimental setup, detailing the baselines and evaluation metrics. The
implementation details are covered in Appendix A.2.
**Baseline methods. To fairly compare POETRY with classic step-by-step baselines like GPT-f [Polu**
and Sutskever, 2020, Jiang et al., 2022a], we implement an Isabelle version of GPT-f, denoted as
_GPT-f Baseline. This baseline model is trained on the same dataset as POETRY, with the only_
modification being the removal of all sorry keywords in the proof steps. All hyperparameters and
setups for training and the BFS search are identical to POETRY to ensure a fair comparison.
Notably, the GPT-f Baseline is similar to Thor [Jiang et al., 2022a], except for three main differences.
Firstly, GPT-f Baseline does not use Sledgehammer [Paulson, 2010], nor replace the smt, metis tactic
with <hammer> in the proof step for training. Secondly, GPT-f Baseline fine-tunes a 1.3B parameter
proofGPT [Azerbayev et al., 2023], whereas Thor uses a 700M model pre-trained on The Pile [Gao
et al., 2020]. GPT-f Baseline also uses a newer version of Isabelle which contains more state action
pairs for training (detailed in Section A.3). Thirdly, during the proof search, the GPT-f Baseline
utilizes the beam-search decoding method instead of sampling to generate proof steps for each proof
state.
Aside from the GPT-f Baseline, we also include state-of-the-art search-based neural theorem-proving
methods. PACT [Han et al., 2022], FMSCL [Polu et al., 2022], Leandojo [Yang et al., 2023], and
COPRA [Thakur et al., 2024] are works focusing on the Lean formal environment.[3] Contrastively,
Thor [Jiang et al., 2022a], Thor with expert iteration on auto-formalized data [Wu et al., 2022] and
Thor + Magnushammer [Mikuła et al., 2023] are works done in Isabelle. Moreover, for methods with
LLMs, COPRA is an in-context learning agent that uses GPT-4 [OpenAI, 2023] to generate proof
steps and prove the theorem step by step.
We do NOT compare our methods with LLM-based proving approaches like DSP [Jiang et al., 2022b],
Lyra [Zheng et al., 2023], or LEGO-Prover [Wang et al., 2023b]. These approaches employ generalpurpose large language models (LLMs), such as ChatGPT or GPT-4, which feature several orders
of magnitude more parameters than the models considered in our study. Moreover, these methods
typically utilize proofs in natural language to guide the generation of formal code without searching
and attempting to solve each problem 100 times. In contrast, POETRY provides a performance
evaluation at pass@1, attempting to prove the theorem once for each problem.
**Evaluation datasets and metrics. For evaluation, we use two datasets, the miniF2F dataset [Zheng**
et al., 2021], and the PISA [Jiang et al., 2021]. The miniF2F dataset comprises 488 problems with
varying levels of difficulty, ranging from basic algebra and number theory, originating from the
MATH dataset [Hendrycks et al., 2021], to more challenging problems found in the AIME[4] and
IMO [Daniel Selsam, 2019]. The problems are divided into valid and test sets, with 244 problems
each. The miniF2F dataset only contains problem statements and we only evaluate our method on
this dataset, without any training. The other dataset we adopt is the PISA test set, which comprises
theorems from the Archive of Formal Proofs [MacKenzie et al., 2021] and the Isabelle standard
library [Nipkow et al., 2002]. To better understand how POETRY performs in complex problems
with multiple levels, we subdivided the test set into two subsets: single-level and multi-level. The
PISA single-level subset contains problems with only one level in the ground truth human-written
proofs, whereas the PISA multi-level subset includes problems with multiple proof levels. A more
comprehensive analysis of the PISA dataset is shown in Appendix A.3. For evaluation metrics, we
follow [Jiang et al., 2022a, Yang et al., 2023] and use pass@1 as the evaluation metric, where each
3HTPS [Lample et al., 2022] achieve 57% and 41% on miniF2F valid and test set for pass@64, but they
didn’t provide the pass@1 results. Additionally, the model is fine-tuned on the miniF2F-valid with online
training, which is not a fair comparison with POETRY.
[4https://artofproblemsolving.com/wiki/index.php?title=AIME_Problems_and_Solutions](https://artofproblemsolving.com/wiki/index.php?title=AIME_Problems_and_Solutions)
-----
Table 1: Comparing with baseline. The table displays the pass@1 success rates of the baselines and POETRY,
The highest success rates for each set are highlighted in bold.
Success rate miniF2F-valid miniF2F-test PISA single-level multi-level
Thor w/o sledgehammer 25.0% 24.2% 39.0% - -
GPT-f Baseline 39.3% 37.3% 48.9% **65.5%** 11.1%
_−_ with sampling decoding 30.3% 31.5% 43.2% 57.8% 9.8%
POETRY **42.2%** **42.2%** **49.6%** 65.4% **13.6%**
Table 2: **Comparing with state-of-the-art search-based methods on the miniF2F dataset. The table**
displays the pass@1 success rates of previous works and POETRY, The highest success rates for each set are
highlighted in bold.
Success rate environment miniF2F-valid miniF2F-test
_Baselines_
PACT [Han et al., 2022] Lean 23.9% 24.6%
Leandojo [Yang et al., 2023] Lean - 26.5%
FMSCL [Polu et al., 2022] Lean 33.6% 29.6%
COPRA [Thakur et al., 2024] Lean - 30.7%
Thor [Jiang et al., 2022a] Isabelle 28.3% 29.9%
Thor + expert iteration [Wu et al., 2022] Isabelle 37.3% 35.2%
Thor + Magnushammer [Mikuła et al., 2023] Isabelle 36.9% 37.3%
_Ours_
POETRY Isabelle **42.2%** **42.2%**
theorem in the dataset is proved once by POETRY. Then we calculate the proportion of the theorems
being successfully proven.
**4.2** **Main Results**
**Comparison with language model-only baselines. As shown in Table 1, we compare POETRY**
with baselines that only utilize language models to search for proofs. Thor w/o sledgehammer is
the language model-only version of Thor [Jiang et al., 2022a]. It does not call the sledgehammer
during the proof search. Our reproduced GPT-f Baselines outperform Thor w/o sledgehammer by
13.7% in miniF2F and 10.6% in the PISA test set. This performance boost is mostly due to using the
beam-search decoding strategy during the proof search, as we observe the performance of the GPT-f
Baseline with sampling drops by 6.8% compared with the beam-search version. This is because
the beam-search decoding method is guaranteed to produce e different proof steps for each proof
state, whereas the sampling will produce duplicate proof steps, making the actual number of proof
steps generated per expansion smaller than e. The remaining performance improvements are mostly
contributed by larger model sizes and better pertaining.
Compared with the GPT-f Baseline, we can observe the benefit of the recursive theorem proving.
POETRY outperforms GPT-f Baselines by 3.9% in the miniF2F dataset on average, and 0.7% in the
PISA test set. The modest performance gain observed in the PISA test set is primarily attributed
to the skewed distribution of problem complexity, with the majority of problems containing only a
single proof level (see Table 3). POETRY executes nearly identically to the GPT-f Baseline when
encountering proofs with only one level, resulting in similar performance within the single-level
subset. In contrast, POETRY achieves a 2.5% improvement on the multi-level subset. Furthermore,
POETRY solves a very distinct set of theorems compared with GPT-f Baseline in PISA, with 99 out
of 1109 theorem solved by POETRY can not be proved by GPT-f Baseline, taking up 4.4% in total.
This outcome well supports the effectiveness of our proposed recursive proving method. Moreover,
the gap between step-by-step approaches and POETRY does not end here. The effectiveness of
POETRY will become even more pronounced as the language models are continuously improved and
solve more complex proofs, where the bottleneck caused by searching comes to the fore yet POETRY
is demonstrated effective for searching.
**Comparison with state-of-the-art methods. In Table 2, we illustrate the proportion of successful**
proofs found on the miniF2F dataset. Due to the larger amount of formal data, as well as the help
of hands in ATP like the sledgehammer, the approaches using Isabelle tend to achieve a higher pass
-----
Proof length histogram on miniF2F
Proof length histogram on PISA
|Col1|method GPT-f Baseline POETRY|method GPT-f Baseline POETRY|
|---|---|---|
4 7 10 13 16 19 22 25
method
GPT-f Baseline
POETRY
Proof length
10[3]
10[2]
10[1]
10[0]
method
GPT-f Baseline
POETRY
(a) (b)
Figure 3: Proof length comparison between POETRY and GPT-f Baseline. The y-axis is shown in the log
scale. (a) Proof length’s histogram of found proof in miniF2F dataset. most of the proof found is within 3 steps
long, especially for GPT-f Baselines, but POETRY managed to find longer proof up to 18 proof steps in one
proof. (b) Proof length’s histogram of found proof in the PISA dataset. POETRY discovers a lot more proofs
```
lemma(in UP_cring) n_mult_closed:
assumes "f ∈ carrier P"
shows "n_mult f ∈ carrier P"
proof(rule UP_car_memI[of "deg R f"])
fix n
assume A: "deg R f < n"
show "n_mult f n = 0"
unfolding n_mult_def
proof
```
```
qed Proof level 1
|102 counts 101 Problem 100 igure 3: cale. (a) ong, espe roof. (b) ith longe lemma(in assume shows|Col2|Col3|method GPT-f Baseline POETRY|Col5|Col6|Col7|Col8|method GPT-f Baseline POETRY|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
||||||||||||
||||1 3 Proof Proof l cially Proof r proof UP_cr s "f ∈ "n_mul|le en fo le l in c t|5 7 Pr ( ngth c gth’s h r GPT- ngth’s engths. g) n_m arrier f ∈ ca|oo a) o ist f his ul P rr|9 11 f length mparison ogram of Baselines togram t_closed " ier P"||13 15 17 between found pro, but POE of found pr :|PO of i TR oof|
|proof(ru show " unfo usin unfo by ( show " usin unfo by (|||le UP_ ⋀n. de|ca g|r_memI R f <|[o n|f "deg R ⟹ n_mul||f"]) t f n = 0"||
||unfo usin unfo by (||lding g assm lding simp a|n_ s P_ dd|mult_d def : UP_c|ef ar|_memE(2)||) Proof lev|el 2|
||||⋀n. n_|mu|lt f n|∈|carrier||R"||
||usin unfo by (||g assm lding simp a|s n_ dd|mult_d : assm|ef s|cfs_clos||ed) Proof lev|el 2|
```
Proof level 1
```
```
(a)
```
Path 1
```
```
Timeout after 600 seconds
```
```
proof(rule UP_car_memI[of "deg R f"])
show "⋀n. deg R f < n ⟹ n_mult f n = 0"
```
```
Path 2
```
```
Never explored
```
⋀
(b)
Figure 4: Case comparison between POETRY and GPT-f Baseline. (a) Recursive proof found by POETRY
in 71.2 seconds, the proof contains two proof levels. (b) Failure-proof paths found by the GPT-f Baseline. GPT-f
Baseline failed to find proof due to timeout after 600 seconds. We select two different failure proof paths found
by GPT-f Baseline.
rate compared with approaches using Lean environments. Our proposed POETRY significantly
outperforms all such approaches. POETRY outperforms Thor+Magnushammer by 5.1% on average,
the highest performance on the miniF2F dataset with the search-based method at pass@1.
Notably, the recursive proving method is orthogonal to these baseline approaches. It can be further
improved with the use of Sledgehammer or Magnushammer [Jiang et al., 2022a, Mikuła et al.,
2023], running expert iteration on the training set [Polu et al., 2022, Wu et al., 2022], using retrieval
augmented proof step generation techniques [Yang et al., 2023], or even better search algorithm in
each level [Wang et al., 2023c, Lample et al., 2022]. As these are not the focus of the current paper,
we leave the integration for future work.
**4.3** **Analysis**
**Can POETRY find longer proof? Figure 3 compares the proof length of proofs discovered by the**
GPT-f Baseline and POETRY in both the miniF2F dataset and the PISA test set. It can be observed
that the proof lengths found by POETRY are longer than those found by the GPT-f Baseline. The
average proof length increases from 1.47 to 2.13 in the miniF2F dataset and 1.62 to 2.09 in the
PISA test set. Prominently, the maximum proof length increases from 3 to 18 compared with the
GPT-f Baselines in the miniF2F dataset, and from 10 to 26 in the PISA test set. This proof length is
unattainable without a recursive proving method. By comparison, the maximum proof length found
by Leandojo in the miniF2F test set is 4, with an average proof length of 1.35. Therefore, it’s evident
that POETRY expands the possibility of discovering longer proofs and addressing more challenging
problems.
-----
**Case study. As illustrated in Figure 4, we compare the proof found by the POETRY with the failed**
attempts by the GPT-f Baseline. The theorem n_mult_closed states that if a polynomial f belongs
to the carrier set of polynomials P, then the operation n_mult applied to f results in a polynomial
that also belongs to P . As shown in Figure 4(a), the proof found by the POETRY contains two
levels, marked with different shades of blue. The first level is completed by first showing two main
properties: (i) Zero polynomial condition (the first show statement in Line 2): For any integer n
greater than the degree of f, n_mult fn must be zero. (ii) Closure under carrier (the second show
statement in Line 7): For any integer n, the result n_mult fn must be within the carrier set R. When
proving the first level, the detailed proof of these two properties will be skipped with the sorry tactic.
After the first level of the proof has been found, POETRY searches for the proof of these properties
one by one in the next level. In contrast, the GPT-f Baseline failed to find valid proof for this problem,
resulting in a search timeout after reaching 600 seconds of time limit. Two failure search trajectories
are selected and shown in Figure 4(b). For proof path 1, the proof searches astray and tries to utilize
a more complex way to prove the first property, resulting in a timeout. The GPT-f Baseline also
identified the first two steps in POETRY’s proof. However, this proof path never had the chance
to be further explored before the timeout occurred. From this case, we can see that by recursively
proving the theorem, the proof with 11 steps is broken down into 3 proof sketches with a maximum
length of 4. Therefore, POETRY effectively prevents the proof search from wasting too much time
on searching for useless mid-step conjectures.
**5** **Related Works**
**Search-based neural theorem proving. Our work is closely related to prior work on step-by-**
step search-based nerual theorem proving. GPT-f [Polu and Sutskever, 2020] is the first to apply
transformer-based language models to generate single-step action for theorem proving in Metamath.
With the ability to generate arbitrary proof text, modern ATPs advance drastically and are capable of
proving theorems in complex ITPs like Lean [de Moura et al., 2015] or isabelle [Paulson, 1994]. The
follow-up work PACT [Han et al., 2022] proposes auxiliary pre-training tasks for action-generating
language models. [Polu et al., 2022] uses expert iteration and syntactic data to bootstrap the language
model’s performance. Most recently, HTPS [Lample et al., 2022] plugs in Monte-Carlo Tree
Search [Silver et al., 2016] in this framework and applies an online version of expert iteration. DTSolver [Wang et al., 2023c] improves HTPS by enabling backtracking during proof search, increasing
the robustness. LeanDojo [Yang et al., 2023] retrieve possible premise to assist the generation of
a single proof step. Lisa and Thor [Jiang et al., 2021, 2022a] tackle theorem proving in Isabelle,
which combines traditional ATPs and language models to suggest proof steps, in a neuro-symbolic
way. All theorem-proving method introduced above proves theorems step-by-step, with short-sighted
heuristics guiding the search to find a correct proof path.
**Nerual theorem proving with a large language model. Another popular paradigm for automated**
theorem proving resorts to large pre-trained language models for proof context generation in an incontext-learning manner, without finetuning on formal mathematic datasets. DSP [Jiang et al., 2022b]
uses OpenAI codex LLM [Chen et al., 2021] to generate the entire proofs guided by informal proof.
It suffers from hallucination problems with LLM and requires multiple attempts for each problem to
ensure correctness. Lyra [Zheng et al., 2023] improves on DSP and uses GPT-4’s auto-correction
ability to correct previous error attempts. Blualr [First et al., 2023] also uses Minerva [Lewkowycz
et al., 2022] for whole proof generation using the initial theorem statement. To prevent hallucination,
Balar finetunes a small model that uses error messages to correct the generated faulty proof. MUSTARD [Huang et al., 2024] generates the problem and the solution concurrently with ChatGPT and
uses Lean as a verifier to check the correctness of the generated content.
**Subgoal-based AI agents. Another domain that is closely related to our paper is subgoal-based**
AI agents [Wang et al., 2023b,a, Wei et al., 2023]. These agents decompose the major tasks into
small sub-objectives and tackle them one by one. However, most AI agents do not focus on formal
mathematic problems, which require compiling the rules of formal environments. LEGO-Prover
[Wang et al., 2023b] approaches the theorem-proving problem by decomposing the target into
subgoal lemmas and building the proof block by block. However, not all the subgoals can be easily
decomposed into lemmas. Many mid-conjectures or subgoals are specific to the current problem
and involve shared variables defined in the previous proving process, making them unsuitable for
extraction as lemmas, or sometimes impossible to extract as lemmas. kSubS [Czechowski et al.,
-----
2024] utilizes a subgoal generation model to produce mid-step proof states and employs a policy
model to generate paths in between. However, the generated proof must adhere to the generated
proof states, thus the method cannot be applied to more complex real-world datasets like miniF2F.
Moreover, the proposed subgoal generator constrains the ability of the policy model to explore freely
and find solutions beyond predefined subgoals.
**6** **Limitations**
The proposed method proves theorems recursively by producing a verifiable proof sketch at each
level. Although this leads to consistent performance improvements, there is no theoretical guarantee
that it will avoid the problem of infinite action space for each proof step generation and the problem
of exponential search space with respect to the depth of the search tree. Furthermore, applying the
framework of POETRY to other formal languages such as Lean or Coq is straightforward but would
require a non-neglectable amount of engineering efforts on some language-specific aspects.
**7** **Conclusion**
In this work, we introduce a novel theorem-proving method, POETRY, which recursively proves the
theorem in a level-by-level manner. POETRY searches for a verifiable proof sketch in each level,
focusing on proving the target theorem, conjecture, or subgoals in the current level, and utilizes a
special sorry tactic to defer detailed proofs of mid-conjectures or subgoals. POETRY introduces
a fundamentally different theorem-proving paradigm to the community, preventing short-sighted
proof searches that easily go astray. The recursive dataset decomposes long proofs into short proof
sketches within a tractable search space. Extensive experiments show that POETRY can indeed
improve the pass rates on the miniF2F dataset and PISA test set, and can find longer proofs compared
to step-by-step approaches.
**References**
Z. Azerbayev, B. Piotrowski, H. Schoelkopf, E. W. Ayers, D. Radev, and J. Avigad. Proofnet:
Autoformalizing and formally proving undergraduate-level mathematics, 2023. 6, 14
B. Barras, S. Boutin, C. Cornes, J. Courant, J.-C. Filliâtre, E. Giménez, H. Herbelin, G. Huet,
C. Muñoz, C. Murthy, C. Parent, C. Paulin-Mohring, A. Saïbi, and B. Werner. The Coq Proof
_[Assistant Reference Manual : Version 6.1. report, INRIA, May 1997. URL https://hal.inria.](https://hal.inria.fr/inria-00069968)_
```
fr/inria-00069968. Pages: 214. 3
```
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda,
N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin,
B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet,
F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss,
A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse,
A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage,
M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and
W. Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374, 2021.
[URL https://arxiv.org/abs/2107.03374. 9](https://arxiv.org/abs/2107.03374)
K. Czechowski, T. Odrzygó´zd´z, M. Zbysi´nski, M. Zawalski, K. Olejnik, Y. Wu, Łukasz Kuci´nski,
and P. Miło´s. Subgoal search for complex reasoning tasks, 2024. 9
R. B. e. a. Daniel Selsam, Kevin Buzzard. Imo grand challenge. 2019. [URL https://](https://imo-grand-challenge.github.io/)
```
imo-grand-challenge.github.io/. 6
```
L. de Moura, S. Kong, J. Avigad, F. van Doorn, and J. von Raumer. The Lean Theorem Prover (System
Description). In A. P. Felty and A. Middeldorp, editors, Automated Deduction - CADE-25, Lecture
Notes in Computer Science, pages 378–388, Cham, 2015. Springer International Publishing. ISBN
978-3-319-21401-6. doi: 10.1007/978-3-319-21401-6_26. 3, 9
-----
J. Devlin, M. Chang, K. Lee, and K. Toutanova. BERT: pre-training of deep bidirectional transformers
for language understanding. In J. Burstein, C. Doran, and T. Solorio, editors, Proceedings of the
_2019 Conference of the North American Chapter of the Association for Computational Linguistics:_
_Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019,_
_Volume 1 (Long and Short Papers), pages 4171–4186. Association for Computational Linguistics,_
[2019. doi: 10.18653/v1/n19-1423. URL https://doi.org/10.18653/v1/n19-1423. 14](https://doi.org/10.18653/v1/n19-1423)
[R. Dunford, Q. Su, and E. Tamang. The pareto principle. The Race, 2021. URL https://api.](https://api.semanticscholar.org/CorpusID:15925174)
```
semanticscholar.org/CorpusID:15925174. 15
```
E. First, M. N. Rabe, T. Ringer, and Y. Brun. Baldur: Whole-proof generation and repair with
large language models. CoRR, abs/2303.04910, 2023. doi: 10.48550/arXiv.2303.04910. URL
```
https://doi.org/10.48550/arXiv.2303.04910. 9
```
L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite,
N. Nabeshima, S. Presser, and C. Leahy. The pile: An 800gb dataset of diverse text for language modeling, 2020. 6
L. Gesellensetter, S. Glesner, and E. Salecker. Formal verification with isabelle/hol in practice:
finding a bug in the gcc scheduler. In Formal Methods for Industrial Critical Systems: 12th
_International Workshop, FMICS 2007, Berlin, Germany, July 1-2, 2007, Revised Selected Papers_
_12, pages 85–100. Springer, 2008. 3_
J. M. Han, J. Rute, Y. Wu, E. W. Ayers, and S. Polu. Proof artifact co-training for theorem proving
with language models. In The Tenth International Conference on Learning Representations, ICLR
_[2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022. URL https://openreview.](https://openreview.net/forum?id=rpxJc9j04U)_
```
net/forum?id=rpxJc9j04U. 1, 2, 6, 7, 9
```
J. Harrison. HOL light: An overview. In S. Berghofer, T. Nipkow, C. Urban, and M. Wenzel,
editors, Theorem Proving in Higher Order Logics, 22nd International Conference, TPHOLs
_2009, Munich, Germany, August 17-20, 2009. Proceedings, volume 5674 of Lecture Notes in_
_Computer Science, pages 60–66. Springer, 2009. doi: 10.1007/978-3-642-03359-9\_4. URL_
```
https://doi.org/10.1007/978-3-642-03359-9_4. 3
```
D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the math dataset, 2021. 6
Y. Huang, X. Lin, Z. Liu, Q. Cao, H. Xin, H. Wang, Z. Li, L. Song, and X. Liang. Mustard: Mastering
uniform synthesis of theorem and proof data, 2024. 1, 9
A. Q. Jiang, W. Li, J. M. Han, and Y. Wu. Lisa: Language models of isabelle proofs. 2021. 1, 3, 6, 9,
15
A. Q. Jiang, W. Li, S. Tworkowski, K. Czechowski, T. Odrzygó´zd´z, P. Miło´s, Y. Wu, and M. Jamnik.
Thor: Wielding hammers to integrate language models and automated theorem provers. arXiv
_preprint arXiv:2205.10893, 2022a. 1, 2, 4, 6, 7, 8, 9, 15_
A. Q. Jiang, S. Welleck, J. P. Zhou, W. Li, J. Liu, M. Jamnik, T. Lacroix, Y. Wu, and G. Lample. Draft,
sketch, and prove: Guiding formal theorem provers with informal proofs. CoRR, abs/2210.12283,
[2022b. doi: 10.48550/arXiv.2210.12283. URL https://doi.org/10.48550/arXiv.2210.](https://doi.org/10.48550/arXiv.2210.12283)
```
12283. 1, 6, 9
```
G. Klein, K. Elphinstone, G. Heiser, J. Andronick, D. Cock, P. Derrin, D. Elkaduwe, K. Engelhardt,
R. Kolanski, M. Norrish, et al. sel4: Formal verification of an os kernel. In Proceedings of the
_ACM SIGOPS 22nd symposium on Operating systems principles, pages 207–220, 2009. 3_
G. Lample, M.-A. Lachaux, T. Lavril, X. Martinet, A. Hayat, G. Ebner, A. Rodriguez, and T. Lacroix.
HyperTree Proof Search for Neural Theorem Proving. Technical Report arXiv:2205.11491, arXiv,
[May 2022. URL http://arxiv.org/abs/2205.11491. arXiv:2205.11491 [cs] type: article. 1,](http://arxiv.org/abs/2205.11491)
2, 6, 8, 9
-----
A. Lewkowycz, A. Andreassen, D. Dohan, E. Dyer, H. Michalewski, V. V. Ramasesh,
A. Slone, C. Anil, I. Schlag, T. Gutman-Solo, Y. Wu, B. Neyshabur, G. GurAri, and V. Misra. Solving quantitative reasoning problems with language models.
In NeurIPS, 2022. [URL http://papers.nips.cc/paper_files/paper/2022/hash/](http://papers.nips.cc/paper_files/paper/2022/hash/18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html)
```
18abbeef8cfe9203fdf9053c9c4fe191-Abstract-Conference.html. 9
```
C. Liu, J. Shen, H. Xin, Z. Liu, Y. Yuan, H. Wang, W. Ju, C. Zheng, Y. Yin, L. Li, et al. Fimo: A
challenge formal dataset for automated theorem proving. arXiv preprint arXiv:2309.04295, 2023.
1
C. MacKenzie, J. Fleuriot, and J. Vaughan. An evaluation of the archive of formal proofs, 2021. 6
M. Mikuła, S. Antoniak, S. Tworkowski, A. Q. Jiang, J. P. Zhou, C. Szegedy, Ł. Kuci´nski, P. Miło´s,
and Y. Wu. Magnushammer: A transformer-based approach to premise selection. arXiv preprint
_arXiv:2303.04488, 2023. 6, 7, 8_
T. Nipkow, M. Wenzel, and L. C. Paulson. Isabelle/HOL: a proof assistant for higher-order logic.
Springer-Verlag, Berlin, Heidelberg, 2002. ISBN 3540433767. 6
OpenAI. GPT-4 technical report. CoRR, abs/2303.08774, 2023. doi: 10.48550/arXiv.2303.08774.
[URL https://doi.org/10.48550/arXiv.2303.08774. 6](https://doi.org/10.48550/arXiv.2303.08774)
L. C. Paulson. Isabelle a Generic Theorem Prover. Springer Verlag, 1994. 3, 9
L. C. Paulson. Three years of experience with sledgehammer, a practical link between automatic and
interactive theorem provers. In R. A. Schmidt, S. Schulz, and B. Konev, editors, Proceedings of the
_2nd Workshop on Practical Aspects of Automated Reasoning, PAAR-2010, Edinburgh, Scotland,_
_UK, July 14, 2010, volume 9 of EPiC Series in Computing, pages 1–10. EasyChair, 2010. doi:_
[10.29007/TNFD. URL https://doi.org/10.29007/tnfd. 6](https://doi.org/10.29007/tnfd)
S. Polu and I. Sutskever. Generative language modeling for automated theorem proving. CoRR,
[abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393. 1, 2, 3, 6, 9](https://arxiv.org/abs/2009.03393)
S. Polu, J. M. Han, K. Zheng, M. Baksys, I. Babuschkin, and I. Sutskever. Formal Mathematics
Statement Curriculum Learning. (arXiv:2202.01344), Feb. 2022. doi: 10.48550/arXiv.2202.01344.
[URL http://arxiv.org/abs/2202.01344. arXiv:2202.01344 [cs] type: article. 1, 2, 6, 7, 8, 9](http://arxiv.org/abs/2202.01344)
D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser,
I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner,
I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the
game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, Jan. 2016.
[ISSN 1476-4687. doi: 10.1038/nature16961. URL https://www.nature.com/articles/](https://www.nature.com/articles/nature16961)
```
nature16961. Number: 7587 Publisher: Nature Publishing Group. 9
```
A. Thakur, G. Tsoukalas, Y. Wen, J. Xin, and S. Chaudhuri. An in-context learning agent for formal
theorem-proving, 2024. 1, 6, 7
G. Wang, Y. Xie, Y. Jiang, A. Mandlekar, C. Xiao, Y. Zhu, L. Fan, and A. Anandkumar. Voyager: An
open-ended embodied agent with large language models. arXiv preprint arXiv: Arxiv-2305.16291,
2023a. 9
H. Wang, H. Xin, C. Zheng, L. Li, Z. Liu, Q. Cao, Y. Huang, J. Xiong, H. Shi, E. Xie, J. Yin, Z. Li,
H. Liao, and X. Liang. Lego-prover: Neural theorem proving with growing libraries, 2023b. 1, 6, 9
H. Wang, Y. Yuan, Z. Liu, J. Shen, Y. Yin, J. Xiong, E. Xie, H. Shi, Y. Li, L. Li, et al. Dt-solver:
Automated theorem proving with dynamic-tree sampling guided by proof-level value function. In
_Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume_
_1: Long Papers), pages 12632–12646, 2023c. 1, 2, 8, 9_
J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain-ofthought prompting elicits reasoning in large language models, 2023. 9
M. Wenzel et al. The isabelle/isar reference manual, 2004. 3
-----
Y. Wu, A. Q. Jiang, W. Li, M. Rabe, C. Staats, M. Jamnik, and C. Szegedy. Autoformalization with
large language models. Advances in Neural Information Processing Systems, 35:32353–32368,
2022. 6, 7, 8
J. Xiong, J. Shen, Y. Yuan, H. Wang, Y. Yin, Z. Liu, L. Li, Z. Guo, Q. Cao, Y. Huang, C. Zheng,
X. Liang, M. Zhang, and Q. Liu. Trigo: Benchmarking formal mathematical proof reduction for
generative language models, 2023. 1
K. Yang, A. M. Swope, A. Gu, R. Chalamala, P. Song, S. Yu, S. Godil, R. Prenger, and A. Anandkumar. Leandojo: Theorem proving with retrieval-augmented language models. arXiv preprint
_arXiv:2306.15626, 2023. 1, 2, 6, 7, 8, 9_
L. Zhang, S. Lu, and N. Duan. Selene: Pioneering automated proof in software verification, 2024. 3
C. Zheng, H. Wang, E. Xie, Z. Liu, J. Sun, H. Xin, J. Shen, Z. Li, and Y. Li. Lyra: Orchestrating dual
correction in automated theorem proving, 2023. 6, 9
K. Zheng, J. M. Han, and S. Polu. miniF2F: a cross-system benchmark for formal Olympiad-level
[mathematics. Sept. 2021. URL https://openreview.net/forum?id=9ZPegFuFTFv. 3, 6](https://openreview.net/forum?id=9ZPegFuFTFv)
-----
**A** **More Details on POETRY**
The outline of the Appendix A is as follows:
- More details on our proposed recursive best-first search.
- Implementation details for POETRY, including the hyperparameters for the methods and
machine configuration.
- More details on the newly extracted PISA dataset, and additional analysis of the statistics
and characteristics of the dataset.
Appendix B discusses the broader impacts of POETRY.
Appendix C includes more examples of found theorems by POETRY.
**A.1** **Details on Recursive BFS**
In this section, we discuss more details on the recursive BFS algorithm. Section 3.2 only introduces
the overall process of how recursive BFS runs, and no detailed introduction on status update rules,
pause and continue of the recursive BFS and terminate conditions. We discuss each in detail below.
**Status update rules. The status update happens whenever a node finishes its expansion, which adds**
all the newly created nodes as children. The update will propagate from the expanded node all the
way to the root node. A node’s status will be marked as FAILED if all the children are FAILED,
and a node will be marked as PROVED or HALF-PROVED if any of its children is PROVED or
HALF-PROVED. Additionally, when POETRY encounters a sorry edge there exists special update
rules. If a node is connected to a PROVED node with a sorry edge, and the next level root node
is OPEN, this means the mid-conjectures/subgoals represented by the sorry edge have not been
proved, the node will be marked as HALF-PROVED (Figure 2(a)). As illustrated in Figure 2(c), if
the sub-root node has failed, the original HALFPROVE status of the node will be updated to OPEN.
And if the sub-root node is PROVED, the original HALFPROVE status of the node will be marked as
PROVED (Figure 2(f)).
**Pause and continue of recursive BFS. Figure 2(a)-(d) illustrates the pause and continue of recursive**
BFS. During the proof search, whenever a proof sketch is found (Figure 2(a)), the current level of the
best-first search will be paused, and POETRY will find the last unproved sorry edge and recursively
call the best-first search algorithm to find the proof for the next-level root node attached in the sorry
edge (Figure 2(b)). If the next-level best-first search fails to find proof for the sub-root node (The
status of the sub-root node is marked as FAILED), POETRY will update the status of the search
tree and continue the paused best-first search for the current level and try to find new proof sketches
(Figure 2(c)-(d)).
**Terminate conditions. The proof search of the current level will terminate and return to the upper-**
level proof search under these scenarios: 1) A complete proof for this level is found, which means
all the middle conjectures or subgoals have been proven by the deeper levels of proof searches,
recursively. The root node status of the current level proof search is marked as PROVED. 2) For
proof search in a level higher than 1, a timeout of 120s has been reached. The root node status will
be marked as FAILED. 3) All the nodes in the proof tree have been explored and no proof has been
found, the root node status will also be marked as FAILED. Additionally, a global timeout of 600s is
added to the entire recursive BFS, ensuring each theorem will not be searched longer than 5 minutes.
We can finally obtain complete proof for the target theorem after the first level best-first search return
as proved, as shown in Figure 2(f).
**A.2** **Implementation Details**
In this work, we use a decoder-only transformer [Devlin et al., 2019] architecture pre-trained with
proof-pile v1.1 dataset [Azerbayev et al., 2023], with 1.3b parameters, 24 layers, 16 attention heads,
a hidden dimension of 2048, and a GPT-2 tokenizer with 50400 vocabulary size. We use the alpaca[5]
codebase for finetuning the model on our recursive dataset. During fine-tuning, we use a global batch
size of 256 with 3500 steps of warmup using the AdamW optimizer. We use the cosine scheduling
[5https://github.com/tatsu-lab/stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
-----
Table 3: Dataset statistics. The table displays the dataset statistics for our newly extracted PISA dataset based
on Isabelle 2022.
train valid test single-level multi-level
Number of theorems 236, 978 2, 347 2, 236 1, 558 681
Number of proof steps 3, 018, 407 27, 419 27, 653 3, 982 23, 671
Average proof length 12.7 11.7 11.8 2.4 33.5
Maximum proof length 10, 320 1, 236 1, 079 204 1079
Average proof level 1.5 1.5 1.5 1.0 2.6
Maximum proof level 26 9 10 1 10
strategy with a maximum learning rate of 3e − 4 and a minimum learning rate of 3e − 5. Our model
is finetuned with 100, 000 steps training budgets and inferences using the lowest validation loss
checkpoints with early stopping.
For the configuration of recursive best-first search. We use a global timeout of 600 seconds; each
proof step has a timeout limit of 10 seconds. The number of samples per expansion e is set to 32,
and we use beamsearch decoding strategies to sample proof steps from the language model. The
maximum number of steps for expansion is set to 128, and the maximum recursive depth for searching
deeper level proof is set to 10. For proof searches other than the first level, a local timeout of 120
seconds is also applied.
**Machine configuration. We use Nvidia A800 GPU with 80GB of GPU memory for fine-tuning. The**
training server has 104 CPU cores and 1024GB of CPU memory. The finetuning takes around 100
GPU hours and requires an additional 50 GPU hours to run a single evaluation on the miniF2F test
set.
**A.3** **Dataset Details**
In this section, we further discuss the details of our newly extracted PISA dataset, including the
dataset statistics and other interesting aspects of the dataset.
**Dataset Statistics. We follow [Jiang et al., 2021, 2022a] and extract data from Isabelle 2022, as well**
as the corresponding version of the Archive of Formal Proof library[6]. We provide detailed statistics
for our fine-tuning dataset. As shown in Table 3, the newly constructed PISA dataset contains 3.02
million proof steps in the training data. In contrast, the old PISA dataset extracted by LISA[Jiang
et al., 2021] only contains 2.49 million proof steps. Another interesting factor of the dataset statistics
is the two subsets of the PISA test. The single-level test set contains 2/3 of the problems in the test
set, but only 14% of the proof step. Whereas the multi-level subset contains the remaining 86% proof
steps.
**How recursive the dataset is? As illustrated in Figure 5(a), the figure shows the histogram of the**
number of proof levels in a single theorem against the number of theorems. As expected most of
the theorem in the training dataset only contains one proof level, which does not require recursive
proving at all. This result matches the Pareto principle [Dunford et al., 2021] where the majority
of the problems are simple and could be tackled without the recursive proving technique. However,
it’s the challenging problems that are of most interest to us, where they can test the boundary of our
method’s actual proving ability.
**How much does the search space shrink by proving the theorem recursively? As the verified**
proof sketches might not always be correct due to mid-conjectures/subgoals’ proof being skipped
by sorry, we can not accurately calculate the search space is shrunk to which extent. However, we
can have a lower bound search space calculated by the ground truth proof. Figure 5(b) shows the
histogram of the number of proof steps that need to be completed until a proof/proof sketch can
pass the verification of Isabelle against the number of these proofs. For conventional step-by-step
approaches, the proof is the original one, and for POETRY, the proof is a proof sketch. We can
observe that the proof length of POETRY is substantially shifted towards a shorter proof length
per proof. On average, there are 3.3 proof steps for POETRY and 12.7 proof steps for step-by-step
6The original dataset is extracted with Isabelle 2021, resulting in x fewer theorems and x fewer lines of state
action pair.
-----
Proof level histogram on PISA
10[5]
10[4]
10[3]
10[2]
Problem counts
10[1]
10[0]
1 4 7 10 13 16 19 22 25
Proof level
Proof/sketch length histogram on PISA
Data format
10[5] Original
Recursive
10[4]
10[3]
Problem counts10[2]
10[1]
1 20 40 60 80 >100
Proof/sketch length
(a)
(b)
Figure 5: Distribution of proof level and proof length in PISA dataset. (a) Histogram of proof level in the
PISA training set. The maximum proof level can reach 26 (b) Comparison between the number of steps in the
original proof and the extracted proof sketches. By breaking the original proof into proof sketches, the proof
length is reduced substantially.
baseline per proof. And that would be 32[9][.][4] times smaller the search space per proof on average with
_e = 32._
**B** **Broader Impact**
The research presented in this paper has the potential to advance automated theorem proving, AI for
Math and software engineering. The advancement can enhance the capabilities of large language
models in formal theorem proving, contributing to more reliable mathematical proof verification and
providing valuable educational resources for students and researchers. By directly releasing the code,
model and data, we aim to ensure the responsible use of our work, fostering further innovation and
maintaining high standards of data privacy and intellectual property compliance.
We anticipate no foreseeable negative societal impacts of this work.
**C** **Examples of Found Theorem by POETRY**
**C.1** **Examples in miniF2F Dataset**
```
theorem amc12b_2020_p6:
fixes n :: nat
assumes h0: "9 \<le> n"
shows "\<exists>x::nat. (real_of_nat x)^2 = (fact (n + 2)
- fact (n + 1)) / fact n"
proof from assms
show?thesis
unfolding power2_eq_square
by (intro exI[of _ "n + 1"]) (auto simp: field_simps)
qed
```
-----
```
theorem mathd_algebra_422:
fixes x :: real and \<sigma>::"real \<Rightarrow> real"
assumes "bij \<sigma>"
and \<sigma>:"\<forall> x. \<sigma> x = 5 * x - 12"
and "\<sigma> (x + 1) = (Hilbert_Choice.inv \<sigma>) x"
shows "x = 47 / 24"
proof from assms
have "bij \<sigma>"
by (auto intro!: bijI simp: bij_def)
show?thesis
proof (rule ccontr)
assume "x \<noteq> 47/24"
thus False
using assms
by (subst (asm) bij_inv_eq_iff) auto
qed
qed
theorem mathd_algebra_441:
fixes x :: real
assumes "x \<noteq> 0"
shows "12 / (x * x) * (x^4 / (14 * x)) * (35 / (3 * x)) = 10"
proof from assms
show?thesis
apply (simp add: divide_simps)
apply algebra
by (simp add: power4_eq_xxxx power2_eq_square)
qed
```
-----
```
theorem mathd_algebra_487:
fixes a b c d :: real
assumes "b = a^2"
and "a + b = 1"
and "d = c^2"
and "c + d = 1"
and "a \<noteq> c"
shows "sqrt ((a - c)^2 + (b - d)^2)= sqrt 10"
proof (rule real_sqrt_unique)
show "(sqrt 10)\<^sup>2 = (a - c)\<^sup>2 + (b - d)\<^sup>2"
proof let?r = real_of_rat
show?thesis
proof (rule power2_eq_imp_eq)
show "((sqrt 10)\<^sup>2)\<^sup>2 = ((a - c)\<^sup>2 + (b - d)\<^sup>2)
\<^sup>2"
proof from assms
show?thesis
unfolding power2_eq_square
apply simp
apply (auto simp: field_simps)
by sos
qed
qed (auto simp: algebra_simps)
qed
qed (simp add: power2_eq_square)
```
**C.2** **Examples in PISA Dataset**
```
lemma rev_morphs: "two_binary_morphisms (rev_map g) (rev_map h)"
proof
show "rev_map g (u \<cdot> v) = rev_map g u \<cdot> rev_map g v" for u v
proof (simp add: rev_map_def)
show "rev (g (rev v \<cdot> rev u)) = rev (g (rev u)) \<cdot> rev (g (rev
v))"
using swap
by (simp add: g.morph)
qed
show "rev_map h (u \<cdot> v) = rev_map h u \<cdot> rev_map h v" for u v
proof (simp add: rev_map_def)
show "rev (h (rev v \<cdot> rev u)) = rev (h (rev u)) \<cdot> rev (h (rev
v))"
using swap
by (simp add: h.morph)
qed
qed
```
-----
```
lemma lset_iterates:
"lset (iterates f x) = {(f ^^ n) x|n. True}"
proof
show "lset (iterates f x) \<subseteq> {(f ^^ n) x |n. True}"
proof(cases "x \<in> lset (iterates f x)")
case True
thus?thesis
by(auto simp add: in_lset_conv_lnth)
next
case False
thus?thesis
by (auto simp: in_lset_conv_lnth)
qed
show "{(f ^^ n) x |n. True} \<subseteq> lset (iterates f x)"
proof safe
fix n
show "(f ^^ n) x \<in> lset (iterates f x)"
proof(induct n arbitrary: x)
case 0
thus?case
by(subst iterates) simp
next
case Suc
thus?case
by(subst iterates)(simp add: o_def funpow_swap1)
qed
qed
qed
lemma neg_distr_cond_bset_eq: "neg_distr_cond_bset (=) (=) tytok"
unfolding neg_distr_cond_bset_def
apply(rule predicate2I)
apply transfer
subgoal for A B
apply(rule bexI[where x=B])
subgoal
apply safe
subgoal
unfolding rel_set_OO
by(auto simp add: rel_set_def OO_def)
subgoal
unfolding rel_set_OO
by(auto simp add: rel_set_def OO_def)
done
by(simp)
done
```
-----
```
lemma frag_cmul_distrib: "frag_cmul (c+d) a = frag_cmul c a + frag_cmul d a"
proof show?thesis
proof (rule poly_mapping_eqI)
fix x
show "lookup (frag_cmul (c + d) a) x = lookup (frag_cmul c a + frag_cmul
d a) x"
proof (cases "x \<in> keys a")
case True
thus?thesis
unfolding lookup_add
using lookup_frag_cmul
by (auto simp: algebra_simps)
qed (auto simp: in_keys_iff lookup_add in_keys_iff)
qed
qed
lemma SETId: assumes "x |\<in>| X" shows "(Id SET X) |@| x = x"
proof have "x \<in> Obj (Op SET)"
using assms
apply (simp add: OppositeCategory_def)
by(simp add: SET_def SET’_def MakeCat_def)
thus?thesis
proof assume 1: "x \<in> obj\<^bsub>Op SET\<^esub>"
show?thesis
proof(simp add: SET_def)
show "id\<^bsub>MakeCat SET’\<^esub> X |@| x = x"
proof(cases "x |\<in>| X")
case True
thus?thesis
apply(simp add: SET’_def)
apply (simp add: MakeCat_def)
by(rule ZFfunApp)
qed (simp add: assms)
qed
qed
qed
```
-----
```
lemma (in encoding_wrt_barbs)
indRelRSTPO_impl_SRel_and_TRel_weakly_reflect_barbs:
fixes SRel :: "(’procS \<times> ’procS) set"
and TRel :: "(’procT \<times> ’procT) set"
assumes reflection: "rel_weakly_reflects_barbs (indRelRSTPO SRel TRel) (
STCalWB SWB TWB)"
shows "rel_weakly_reflects_barbs SRel SWB"
and "rel_weakly_reflects_barbs TRel TWB"
proof have "rel_weakly_reflects_barbs SRel SWB \<and> rel_weakly_reflects_barbs
TRel TWB"
proof (rule conjI)
show "rel_weakly_reflects_barbs SRel SWB"
using reflection rel_with_source_impl_SRel_weakly_reflects_barbs[where
Rel="indRelRSTPO SRel TRel" and SRel="SRel"]
by (simp add: indRelRSTPO.source[where SRel="SRel" and TRel="TRel"])
show "rel_weakly_reflects_barbs TRel TWB"
using reflection rel_with_target_impl_TRel_weakly_reflects_barbs[where
Rel="indRelRSTPO SRel TRel" and TRel="TRel"]
by (simp add: indRelRSTPO.target[where SRel="SRel" and TRel="TRel"])
qed
thus "rel_weakly_reflects_barbs SRel SWB" and "rel_weakly_reflects_barbs
TRel TWB"
by simp_all+
qed
```
-----
| [
"Wenda, Li",
"Haiming, Wang",
"Zhengying, Liu",
"Zhenguo, Li",
"Huajian, Xin",
"Jianqiao, Lu",
"Yinya, Huang",
"Zhicheng, Yang",
"Jing, Tang",
"Jian, Yin",
"Xiaodan, Liang"
] | 2024-05-23T00:00:00 | NeurIPS 2024 | true | 1 | 0 | [
"Isabelle"
] | http://arxiv.org/abs/2405.14414 | https://arxiv.org/abs/2405.14414 | https://www.semanticscholar.org/paper/29d2bf6d4b0ce5cdd2cf0ad3103597ba5681f29f |
PutnamBench: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition | We present PutnamBench, a new multilingual benchmark for evaluating the ability of neural theorem-provers to solve competition mathematics problems. PutnamBench consists of 1337 hand-constructed formalizations of 514 theorems sourced from the William Lowell Putnam Mathematical Competition, the premier undergraduate-level mathematics competition in North America. All the theorems have formalizations in Lean 4 and Isabelle; a substantial subset also has Coq formalizations. Proving the theorems requires significant problem-solving ability and proficiency in a broad range of topics taught in undergraduate mathematics courses. We use PutnamBench to evaluate several established neural and symbolic theorem-provers. These approaches can only solve a handful of the PutnamBench problems, establishing the benchmark as a difficult open challenge for research on neural theorem-proving. PutnamBench is available at https://github.com/trishullab/PUTNAM. | PutnamBench consists of 1697 hand-constructed formalizations of 640 theorems sourced from the William Lowell Putnam Mathematical Competition, the premier undergraduate-level mathematics competition in North America. | ## PUTNAMBENCH: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition
George Tsoukalas Jasper Lee John Jennings Jimmy Xin
UT Austin UT Austin UT Austin UT Austin
Michelle Ding Michael Jennings Amitayush Thakur Swarat Chaudhuri
UT Austin UT Austin UT Austin UT Austin
Abstract
We present PUTNAMBENCH, a new multilingual benchmark for evaluating the
ability of neural theorem-provers to solve competition mathematics problems.
PUTNAMBENCH consists of 1697 hand-constructed formalizations of 640 theorems sourced from the William Lowell Putnam Mathematical Competition, the
premier undergraduate-level mathematics competition in North America. All the
theorems have formalizations in Lean 4 and Isabelle; a substantial subset also has
Coq formalizations. Proving the theorems requires significant problem-solving
ability and proficiency in a broad range of topics taught in undergraduate mathematics courses. We use PUTNAMBENCH to evaluate several established neural
and symbolic theorem-provers. These approaches can only solve a handful of the
PUTNAMBENCH problems, establishing the benchmark as a difficult open challenge for research on neural theorem-proving. PUTNAMBENCH is available at
[https://github.com/trishullab/PutnamBench.](https://github.com/TrishulLab/PutnamBench)
1 Introduction
Automating mathematical reasoning is a longstanding goal in artificial intelligence (Newell et al.,
1957). A prominent line of work on the problem (Li et al., 2024) uses neural models to direct theorem-proving in formal frameworks like Lean 4 (Moura and Ullrich, 2021), Isabelle
(Wenzel et al., 2008), and Coq (Huet et al., 1997). These frameworks can “execute” proofs like
code and offer execution feedback, which simplifies the search for correct proofs.
The design of quality benchmarks is a key challenge in this research area. The two most prominent
benchmarks for neural theorem-proving are MINIF2F (Zheng et al., 2021) and FIMO (Liu et al.,
2023). The former formalizes a mix of problems from high-school level courses and mathematics
competitions such as AIME, AMC, and IMO; the latter consists of a collection of IMO problems.
Both benchmarks have limitations. For example, MINIF2F contains many problems that can be
immediately solved using an SMT solver, and FIMO only targets the Lean 3 framework, which is no
longer actively maintained.
More generally, as large language models (LLMs) grow in importance as a tool for neural theoremproving (Li et al., 2024), preventing leakage between pretraining sets and evaluation sets is more
important than ever. This makes the continued supply of new benchmarks an important goal.
In this paper, we respond to this challenge with PUTNAMBENCH, a new hand-curated, multilingual benchmark for neural theorem-provers. PUTNAMBENCH includes 1697 formalizations of 640
problems from the William Lowell Putnam Mathematical Competition, the premier college-level
Preprint. Under review.
-----
mathematics competition in the United States.[*] All our problems have Lean 4 (Moura and Ullrich,
2021) and Isabelle (Wenzel et al., 2008) formalizations; a substantial fraction have formalizations
in Coq (Huet et al., 1997) as well. The formalizations are all manually constructed and have been
carefully debugged. The benchmark also includes the original English-language problem statements
with permission from the Mathematical Association of America, which organizes the Putnam competition.
One key benefit of PUTNAMBENCH is that Putnam competition problems require a broad base of
mathematical knowledge and skills. Because they target undergraduate students, they cover topics
such as analysis and abstract algebra that do not appear in the International Mathematical Olympiad
(IMO). At the same time, success in the two competitions is correlated — top performers on the
Putnam competition are often former IMO medalists as well. Hence, PUTNAMBENCH is wellaligned with the IMO Grand Challenge (Challenge, 2019) and the AI Mathematical Olympiad (Prize,
2023), the latter of which offers a $10M prize fund for developing a system that can win a gold medal
at the IMO.
Another advantage is that PUTNAMBENCH is multilingual. Lean 4, Coq, and Isabelle are currently
the three most popular formal proof languages. However, theorem-proving benchmarks typically
only contain problems in a strict subset of these languages — for example, MINIF2F (Zheng et al.,
2021) does not include Coq problems, and FIMO (Liu et al., 2023) only targets Lean. PUTNAMBENCH is the first mathematics-competition benchmark to include problems in all three languages.
We use PUTNAMBENCH to evaluate several neural and symbolic approaches: Draft-Sketch-Prove
(Jiang et al., 2022b), COPRA (Thakur et al., 2024), GPT-4, Sledgehammer (Paulson and Blanchette,
2015), and Coqhammer (Czajka and Kaliszyk, 2018). Collectively, these methods can only solve a
handful of the PUTNAMBENCH problems, establishing PUTNAMBENCH as a hard open challenge
for the neural theorem-proving community.
2 Background
Formal Theorem-Proving. Formal proof
frameworks like Lean 4 (Moura and Ullrich,
2021), Coq (Huet et al., 1997), and Isabelle
(Wenzel et al., 2008) allow users to write
machine-verifiable proofs of mathematical
theorems. To create such a proof, one first
uses a framework-specific language to formally
state the target theorem. The mathematical
objects referenced in the theorem can be
imported from an existing repository or defined
by the user. During the proof process, the proof
framework maintains a state that includes
information about the parts of the proof that
remain to be completed. One can change this
state by executing a proof step. The user’s goal
is to write a sequence of proof steps (in the
framework’s language) that changes the proof
state to a special state “QED” in which there
are no unmet proof obligations.
Figure 1 illustrates a theorem and proof in the
Lean 4 framework.
theorem putnam_1988_b1 :
∀ a ≥ 2, ∀ b ≥ 2, ∃ x y z : Z,
x > 0 ∧ y > 0 ∧ z > 0 ∧
a * b = x * y + x * z + y * z + 1 := by
intro a ha b hb
use a - 1, b - 1, 1
constructor
linarith
constructor
linarith
constructor
linarith
ring
state by executing a proof step. The user’s goal
Figure 1: A formalization of Putnam 1988 B1 in
is to write a sequence of proof steps (in the
Lean 4, which asserts that for all integers a, b ≥ 2,
framework’s language) that changes the proof
there are positive integers x, y, z such that ab =
state to a special state “QED” in which there
xy + xz + yz + 1. The formal proof begins by
are no unmet proof obligations.
introducing all relevant variables and hypotheses
Figure 1 illustrates a theorem and proof in the with intro, then indicating the choice of x, y, z
Lean 4 framework. with use, and afterwards proving all goals using
the automated tactics linarith and ring. This
The Putnam Competition. The William proof was discovered through a few-shot invocaLowell Putnam Mathematical (Competition, tion of GPT-4.
2024), organized by the Mathematical Association of America (MAA), is the premier collegiate mathematics competition in North America.
Thousands of undergraduate students from universities across the United States and Canada take
the exam each year. The competition comprises two 3-hour-long sessions of six problems each,
[*PUTNAMBENCH is available at https://github.com/trishullab/PutnamBench.](https://github.com/TrishulLab/PutnamBench)
-----
Benchmark # Natural Language Lean Isabelle Coq Factored Solution
MINIF2F 488 ✓ ✓[†] ✓
PROOFNET 371 ✓ ✓[†] N/A
FIMO 149 ✓ ✓[†]
PUTNAMBENCH 640 ✓ ✓ ✓ ✓ ✓
Table 1: Comparison of existing formal theorem proving evaluation benchmarks. PUTNAMBENCH
exceeds prior benchmarks by providing support for all of Lean 4, Isabelle, and Coq, on a set of
difficult competition problems using undergraduate-level mathematics. For problems requiring a
numerical solution in addition to a proof, we factor the solution out of the theorem statement.
presented in approximately ascending order of difficulty within each session. While some problems
require competitors to furnish a concrete solution (such as a number, a set, or the truth value of a
given statement), all problems require a natural-language proof of correctness. The contest draws
from a wide variety of topics in the undergraduate curriculum, often using instances of ideas from
research-level mathematics.
3 PUTNAMBENCH
Category Total Quantity
Algebra 253
Analysis 226
Number Theory 107
Geometry 68
Linear Algebra 51
Abstract Algebra 28
Combinatorics 26
Probability 9
Set Theory 8
PUTNAMBENCH is a multilingual evaluation benchmark consisting of formalized problems from the Putnam competition.
PUTNAMBENCH is a manually produced benchmark, including 640 formalizations in Lean 4 and Isabelle, and 417 formalizations in Coq. In aggregate, PUTNAMBENCH contains
1697 formalizations of Putnam competition problems. We
also incorporate the informal statements and numerical solutions where applicable.
Now we elaborate on the main features of PUTNAMBENCH.
Diversity and Breadth. Compared to MINIF2F Table 2: Quantity by domain of PUT(Zheng et al., 2021) and FIMO (Liu et al., 2023), which NAMBENCH problems. Our formalgenerally rely on high-school mathematics, PUTNAMBENCH izations generally reflect the variety
incorporates a wider variety of problems which require defini- of Putnam problems, though we can
tions of the standard undergraduate mathematics curriculum. only formalize few geometry and
The PROOFNET benchmark (Azerbayev et al., 2023) also probability problems due to limited
sources problems from the undergraduate curriculum, but support for these topics in the rethese problems are generally from standard textbooks as spective mathematical libraries.
opposed to mathematical competitions. Putnam problems
often require definitions from multiple fields, which standard
textbooks do not necessarily target. Formalizations in PUTNAMBENCH include concepts from a
wide range of mathematical fields, including: (i) Analysis: Limits, integrals, derivatives, continuity;
(ii) Linear Algebra: Matrices, determinants, fields; (iii) Abstract Algebra: Rings, groups, magmas,
permutations; (iv) Algebra: Polynomials, inequalities, algebraic expressions; (v) Number Theory:
Primes, irrationality, base representations, divisors, palindromes; (vi) Geometry: Polygons, point
sets, line intersections, Euclidean distance; (vii) Set Theory & Combinatorics: Countability, power
sets, discrete structures, counting.
Multilinguality. PUTNAMBENCH contains formalizations of Putnam problems in Lean 4, Isabelle,
and Coq. The formalizations also include concepts defined in each proof assistant’s mathematical
repositories — notably, Mathlib, the HOL standard library, and Coquelicot (among various Coq
repositories). To the best of our knowledge, PUTNAMBENCH is the first undergraduate-level competition benchmark for each of these languages. Furthermore, we are the first to produce a human
mathematics competition-style evaluation benchmark for Coq.
We hope that this contribution can enable Coq practitioners access to the rapidly-growing field of
machine learning for mathematics.
-----
Generally, the formalizations of the problems are aligned in their structure, including hypothesis
naming and framing. Differences may arise according to the underlying foundations of each language. We also note that the pre-defined mathematical theory in each language differs, which can
sometimes lead to difficulties formalizing certain problems.
Compared to the prior benchmarks MINIF2F, FIMO, and PROOFNET, PUTNAMBENCH is the first
to support Lean 4 on initial release [†].
Factored Solutions. Roughly 60% of Putnam problems, in their natural language form, require
exhibiting a (closed-form) solution along with a proof of its correctness. Such problems do not assert
propositions, and hence are not immediately formalizable as they are not directly the statement of a
theorem. Prior benchmarks such as MINIF2F (Zheng et al., 2021) sidestep this issue by rewording
the problem statement to ask for a proof that the solution satisfies the constraints of the problem.
However, this reduction diminishes the overall difficulty of the problem, as producing a solution
can constitute the majority of the difficulty. To address this issue, we factor out solutions of such
problems from the formalized theorem statement. We include an example in Figure 2. In this way,
we provide two tasks for neural theorem proving:
- Task 1: Given the theorem statement, first identify the (closed-form) solution, and then provide
a proof of correctness by rewriting the solution into the theorem statement.
- Task 2: Given the theorem statement and solution, produce a proof of its correctness. This task
aligns with the current benchmarks.
We note that the process of producing the
numerical solution may be highly correlated with the proof of its correctness. In
this way, our formalizations can reflect
the true difficulty of the informal problem
statement.
Formalization effort and challenges. We
hand-crafted our benchmark over the
course of several months as a team of graduate and undergraduatestudents with experience in both university mathematics and
formal proof assistants. We found that the
average time-to-formalize a single problem in one language was roughly 25 minutes. Each formalization was verified by a
second person at least once, and we measured that the verification of a single formalization took between 10 minutes, on
average. We acknowledge that the timeto-formalize we report is higher than that
of MINIF2F; we believe this is largely due
to the increased complexity of the Putnam
problems, which oftentimes require definitions we must locate in each language’s respective mathematical libraries.
Putnam 2008 B5. Find all continuously differentiable functions f : R → R such that for every
rational number q, the number f (q) is rational and
has the same denominator as q.
abbrev solution : Set (R → R) :=
{fun (x : R) => x + n | n : Z} ∪
{fun (x : R) => -x + n | n : Z}
theorem putnam_2008_b5
(fqsat : (R → R) → Q → Prop :=
fun f q => ContDiff R 1 f ∧
(∃ p : Q, p = f q ∧ p.den = q.den))
(fsat : (R → R) → Prop :=
fun f => ∀ q : Q, fqsat f q)
: ∀ f : (R → R),
fsat f ↔ f ∈ solution := sorry
Figure 2: A formalization of Putnam 2008 B5 in Lean
of MINIF2F; we believe this is largely due
4. As the problem requires exhibiting the set of func
to the increased complexity of the Putnam
tions f satisfying the specified conditions, it is not di
problems, which oftentimes require defini
rectly the statement of a theorem. We formalize the
tions we must locate in each language’s re
problem by instantiating a variable “solution” outside
spective mathematical libraries.
of the theorem statement. In this way, a model can
We first produced formalizations in Lean either provide its own candidate, or use the correct so4, and then proceeded with our formaliza- lution we provide and attempt to produce a proof of
tion effort in Isabelle and then Coq. Due to correctness. Benchmarks such as MINIF2F and FIMO
differences in the underlying foundations only include formalizations with the solution written
of each language, we found that formaliza- into the theorem statement.
tions in one language sometimes do not directly transfer to another; for example, Isabelle does not have a subtyping mechanism, which we
†MINIF2F, FIMO, and PROOFNET were originally released using Lean 3, and MINIF2F and FIMO
have been updated to include Lean 4 formalizations following community efforts. (Azerbayev et al., 2023;
Vishwakarma et al., 2024). To the best of our knowledge, no open-sourced Lean 4 version of FIMO currently
exists.
-----
made extensive use of in Lean 4. Formalizations in Coq have an added difficulty: Coq lacks an
expansive unified library such as Mathlib and the HOL Library, which we make extensive use of
in Lean 4 and Isabelle respectively. Our Coq formalizations rely on eight mathematics repositories: Stdlib, Stdpp, MathComp, MathComp-Analysis, Coquelicot, GeoCoq, and Coqtail (Mathcomp,
2015; mathcomp-analysis; Coquelicot, 2015; GeoCoq, 2015; Coqtail, 2017).
Some problems are not naturally amenable to formalization — for example, we found that while
formalizing problems involving probabilities is possible, such formalizations often require heavy
probability theory.
Similarly, support for problems involving Euclidean geometry varies
across languages; in particular, Lean
4 does not yet have a sufficiently extensive library to make most geometry problems formalizable. By contrast, Coq has an extensive geometry
repository called GeoCoq, which we
utilize for our Coq formalizations.
Dataset Contamination. Our benchmark is unique compared to informal benchmarks such as MATH
(Hendrycks et al., 2021) and GSM8K
(Cobbe et al., 2021) in the sense that
the target output has never been produced, hence avoiding direct contamination. To the best of our knowledge,
we are the first to provide formalizations of a large collection of Putnam
problems in any of Lean, Isabelle,
and Coq. Since writing a formal
proof requires the formal theorem
statement, it is highly unlikely any
possible formal proof has been written for any of our problems. We performed a thorough investigation of
formal mathematics repositories for
each language for confirmation, finding no aligned theorems and proofs
from the Putnam Competition. We do
not include any of the formal proofs
in our benchmark.
(a) theorem putnam_2006_b2
(n : N)
(npos : n > 0)
(X : Finset R)
(hXcard : X.card = n)
: (∃ S ⊆ X, S ̸= ∅∧ ∃ m : Z,
|m + Σ s in S, s| ≤ 1 / (n + 1))
(b) theorem putnam_2006_b2:
fixes n :: nat
and X :: "real set"
assumes npos: "n > 0"
and hXcard: "finite X ∧ card X = n"
shows "∃ S ⊆ X. (S ̸= {}) ∧ (∃ m :: int.
¦m + (Σ s ∈ S. s)¦ ≤ 1 / (n + 1))"
(c) Theorem putnam_2006_b2
(n : nat)
(npos : gt n 0)
(X : list R)
(hXcard : length X = n)
: exists (presS: R -> Prop) (m: Z) (S: list
R),
(neq (length S) 0) /\ (forall (x: R),
In x S <-> (In x X /\ presS x))
/\ (Rabs (IZR m + (fold_left Rplus S 0))
<= 1 / INR (n + 1)).
Figure 3: Formalizations of Putnam 2006 B2 in (a) Lean 4,
Furthermore, any proofs found by au- (b) Isabelle, (c) Coq. Putnam 2006 B2 asserts that given a
tomated methods in our evaluations finite subset X ⊆ R with |X| = n > 0, there is a nonempty
are not included and are only men- subset S ⊆ X and an m ∈ Z such that |m + s S [s][| ≤]
tioned in this article. Indirect con- 1 ∈
n+1 [.]
tamination can occur through transfer [P]
from training on the informal proofs,
though producing proofs in formal proof environments still presents a major difficulty for all current
neural methods, as we find in Section 4.
Licensing and Rules of Engagement. PUTNAMBENCH is available under an Apache 2.0 license
for Lean 4 and Isabelle, and under an MIT license for Coq. We align the licenses with those of the
repositories we use for each language. With permission from the MAA, we include the informal
statements as sourced from the competition (Alexanderson et al., 1985; Kedlaya et al., 2002, 2020).
[We host a public leaderboard at https://trishullab.github.io/PutnamBench/ and will readily accept](https://trishullab.github.io/PutnamBench/)
evaluation results from future works.
-----
PUTNAMBENCH: Lean
Method Success Rate
GPT-4 1/640
COPRA 1/640
ReProver (+r) 0/640
ReProver (−r) 0/640
PUTNAMBENCH: Coq
Method Success Rate
GPT-4 1/417
COPRA 1/417
Tactician 0/417
CoqHammer 0/417
PUTNAMBENCH: Isabelle
Method Success Rate
GPT-4 1/640
DSP 4/640
Sledgehammer 3/640
Table 3: Results of evaluations on PUTNAMBENCH in each language. We find that all tested methodologies perform poorly, solving at most a handful of problems. Notably, the only problem solved in
both Lean and Coq is Putnam 1988 B1, which is not solved by any method in Isabelle. ReProver,
our finetuned baseline for Lean, is unable to solve any problems with or without retrieval. Symbolic
automation proves to be powerful in Isabelle, with Sledgehammer solving the most problems than
GPT4 alone. DSP generates four successful proofs, two of which cannot be generated by Sledgehammer alone.
4 Experimental Evaluation
To understand the challenges that PUTNAMBENCH poses for state-of-the-art theorem-proving approaches, we attempt to solve its problems using a suite of such approaches. Given the relative
lack of tailored systems for multilingual theorem-proving, we run evaluations for each language
separately. Any method that is evaluated on multiple languages is based on off-the-shelf foundation
models.
Metrics. Our evaluation is based on the pass@n (Lample et al., 2022) metric. This metric measures
a prover’s ability to produce a successful proof, as determined by the formal proof environment,
given a budget of n proof attempts. In search-based methods (Thakur et al., 2024), each proof
attempt involves a distinct search that can query a neural model multiple times.
Models. For each of the languages, we perform evaluations using GPT-4 (OpenAI, 2023) [‡], a
highly capable foundation model. We run evaluations using in-context learning, appending several
examples of successful proofs of simple theorems in each language. For evaluations with Lean 4
approaches, we note that many approaches have targeted Lean 3, which is not backward-compatible
and no longer actively maintained. We evaluate COPRA (Thakur et al., 2024) on PUTNAMBENCH,
modifying the prompt examples of COPRA to enable search in Lean 4. Furthermore, we run evaluations LeanDojo’s retrieval-augmented prover REPROVER, a finetuned model designed to utilize
incorporate retrieved lemmas as part of the proof search. We also include evaluate with the retrieval
component held out.
For our Isabelle experiments, we run evaluations of Draft, Sketch, and Prove (DSP) (Jiang et al.,
2022b) using GPT-4 as the underlying foundation model, noting that many further works for
theorem-proving in Isabelle have extended on the DSP pipeline as we mention in Section 5. We
also run evaluations using stand-alone invocations to Sledgehammer, a powerful symbolic automation tool in Isabelle that relies on calls to external SMT solvers.
As for our Coq experiments, prior neural approaches for Coq have mostly targeted software verification tasks, as opposed to competition mathematics. As a result, our Coq experiments use COPRA,
which also supports theorem-proving in Coq. We evaluate using the Tactician (Blaauwbroek et al.,
2020) platform with the locality sensitive hashing model configuration. We also run evaluations
using CoqHammer (Czajka and Kaliszyk, 2018), a tool similar to Isabelle’s Sledgehammer, which
makes calls to external constraint solvers.
4.1 Results
Lean 4. We prompt GPT-4 in a pass@10, setting temperature T = 0.7 and using several examples
of simple theorems and proofs, to generate a proof for each problem. The result of this experiment
yields a single successful proof across all 640 Lean formalizations. The problem (Putnam 1988 B1)
and the generated proof are given in Figure 1. In particular, Putnam 1988 B1 is solved on the first of
10 attempts. An example of a failure mode of GPT-4 is given in Figure 18.
‡We use GPT-4o for all our evaluations
-----
We also run evaluations with COPRA, using their default hyperparameters for search, performing
a pass@1, and allowing 60 queries to GPT-4. However, since COPRA was originally designed for
interaction with Lean 3, we make a small modification to its system prompt to enable search in Lean
4. The result of the step-wise proof search over all Lean 4 formalizations yields a correct proof
to one problem (1988 B1). We find that backtracking in the search was not required for this proof,
which was 10 lines long and was found at the 10th query. It is possible that affording COPRA further
queries to GPT-4 can yield more successful proofs, though it is not yet feasible to perform such an
experiment due to the cost of queries to GPT-4.
We found that, by default, GPT-4 produces proofs using Lean 3 syntax, which is not compatible with
Lean 4. Even when directed to produce outputs in Lean 4, GPT-4 typically continues to produce
outputs in Lean 3. Our prompt, which we include in Figure 16, elucidates some design differences
in Lean 4 to better enforce compliance with the Lean 4 syntax. However, we noticed many examples
where GPT-4 continues to output terms in Lean 3 syntax. One such example is given in Figure 17.
We run REPROVER using the standard search parameters used in LeanDojo (Yang et al., 2023). Our
evaluation yields no successfully proven problems, with and without the inclusion of the retrieval
module. We believe that Putnam 1988 B1, which the other methods solve, is not solved by REPROVER as it requires an understanding that the choice of x, y, z = 1, a − 1, b − 1 will eventually
satisfy the conditions of the goal after simplification. Smaller models, like the one driving REPROVER’s search, may not be as readily capable of such understanding.
Isabelle. We run GPT-4 using the same configuration, with modified prompts for Isabelle, on our
Isabelle formalizations. We find that GPT-4 can produce a single successful proof to Putnam 1986
B1, a geometric problem stated algebraically. We include the statement and its proof as generated
by GPT-4 in Figure 19.
DSP represents a neurosymbolic
methodology which has seen significant application for theorem-proving
in MINIF2F. We run DSP with
pass@10, using temperature T =
0.1 and GPT-4 as the underlying language model. Our evaluation yields
four successful proofs: of Putnam
2001 A1 and 1971 B1, two problems
involving magmas (sets with a binary
operation), one of Putnam 1995 A1,
a problem involving a closed-undermultiplication subset of the reals, and
Putnam 1986 B1. In particular, Putnam 1995 A1 and 1986 B1 cannot be
solved by Sledgehammer alone. The
generated proof of Putnam 1995 A1
is included in Figure 4.
Putnam 2001 A1. Consider a set S and a binary operation ⋆, i.e., for each a, b ∈ S, a ⋆b ∈ S. Assume
(a⋆b) ⋆a = b for all a, b ∈ S. Prove that a⋆ (b⋆a) = b
for all a, b ∈ S.
theorem putnam_2001_a1:
fixes op :: "'a ⇒ 'a ⇒ 'a"
assumes hop : "∀a b :: 'a.
op (op a b) a = b"
shows "∀a b :: 'a. op a (op b a) = b"
proof {
fix a b :: 'a
have "op (op a (op b a)) a = op b a" using
hop by simp
then have "op a (op b a) = b" using hop by
generated proof of Putnam 1995 A1 metis
is included in Figure 4. }
then show ?thesis by simp
We run a baseline using Sledgeham
qed
mer, a powerful automation tool in
Isabelle which makes calls to external SMT solvers to prove a given
goal. With a set timeout of t = 120
Figure 4: A formalization of Putnam 2001 A1 in Isabelle
seconds, we run Sledgehammer on
and the corresponding proof discovered by our evaluation
each Isabelle formalization. The re
with DSP. Sledgehammer alone can also produce a success
sult of this evaluation is 3 success
ful proof to this theorem.
fully proven problems: Putnam 1971
B1, 2001 A1, and 2012 A2. Notably, all of these problems are statements about sets with binary
operations. We include the statements of 1971 B1 and 2012 A2 in Figure 22.
Coq. We run GPT-4 with a Coq-based prompt on our Coq formalizations using the same configuration as in Lean and Isabelle. The result of the experiment is 1 solved problem, namely Putnam
-----
1988 B1, which was also solved in Lean 4. The proof, which we include in Figure 14, generally
follows the same structure as the proof in Lean.
An evaluation with COPRA, in a pass@1-with-60-queries and T = 0.0 also yields a successful
proof only for Putnam 1988 B1 which we include in Figure 14. In this case, backtracking was
crucial for proof search on this problem. The crucial step in 1988 B1 is the choice of x, y, z once a
and b have been introduced. Initially, COPRA predicts the erroneous choice x, y, z = 1, 1, ab − 1
and eventually reverts this choice using backtracking. Afterwards, COPRA predicts a correct choice
x, y, z = 1, a − 1, b − 1 and proceeds with the proof.
We run Tactician using the locality sensitive hashing model with a timeout of t = 600s per problem.
Our evaluation yields no successfully proven problems. While showing favorable performance on
theorems drawn from Coq’s standard library (Zhang et al., 2021), such methodologies do not as of
yet scale to challenging olympiad-style problems.
We run CoqHammer with 8 parallel threads using an ATP timeout of 100 seconds, proof reconstruction timeout of 15 seconds, and sauto timeout of 5 seconds, for a total of 120 seconds allocated for
each formalization. The evaluation yields no successful proofs — indicating that symbolic tools in
Coq are not yet capable of handling PUTNAMBENCH problems. It is not surprising that CoqHammer does not match the performance of Sledgehammer even though they rely on the same external
solvers. The underlying logical system of Coq is more complex than that of Isabelle and is hence
less amenable to automation.
4.2 General Analysis
Aggregating over all experiments performed in all languages, we find that a total of 6 problems in
PUTNAMBENCH are successfully proven. A majority of these come from evaluations in Isabelle,
particularly with strong contributions from Sledgehammer. Sledgehammer can solve all three problems involving magmas which appear in our benchmark but fails to produce successful proofs for
any other formalization. DSP solves an additional two problems and relies heavily on Sledgehammer to fill in the proofs of intermediate steps. The single problem solved in Lean and Coq also
makes use of automated tactics like linarith and lia, and requires only a single crucial step.
Hence, we find that a few PUTNAMBENCH problems are not entirely intractable using current methods. However, anecdotally, these problems are among the easiest ever included in the Putnam competition. All admit a very short natural language proof and do not require reasoning about particularly
complicated objects. We believe that significant advancements in automated mathematical reasoning
are required to make progress on PUTNAMBENCH.
5 Related Work
Formal Benchmarks. Several evaluation benchmarks for formal mathematics have been developed
in recent years. MINIF2F (Zheng et al., 2021) is a formal-to-formal benchmark of competition
problems, sourced from high school competitions such as the AMC, AIME, and IMO. MINIF2F
is a multilingual benchmark, comprising of 488 problems each formalized in Lean 3, Metamath,
Isabelle and HOL Light. We chose not to include formalizations in Metamath and HOL Light as
they have not been the focus of attention for neural theorem-proving. A similar competition-style
benchmark is FIMO (Liu et al., 2023), which contains 149 Lean 3 formalizations of IMO shortlist
problems produced using a back-translation procedure with GPT-4. The automatically-generated
formalizations are then manually verified. Both benchmarks are designed to measure certifying the
solution to the informal problem statement when one exists. Compfiles (2024) is a collection of
171 Lean 4 formalizations of competition problems, predominantly from the IMO and USAMO,
often accompanied by a formal proof, which has not seen use in benchmarking automated theoremprovers. ProofNet (Azerbayev et al., 2023) introduced a benchmark of 371 exercises, formalized
in Lean 3, from standard textbooks in the undergraduate mathematics curriculum. While largely
not competition-based, problems in ProofNet draw from a broader library of concepts than miniF2F
and FIMO, which rely only on high-school mathematics. LeanDojo (Yang et al., 2023) introduces a
dataset of formal mathematics and proofs derived from Lean’s mathlib library (mathlib Community,
2020), and trains a retrieval-augmented model towards generating proofs on their held-out test set.
ProverBot9001 (Sanchez-Stern et al., 2020) introduced a dataset for theorems and proofs written in
-----
Coq derived from CompCert (Leroy, 2009), a formally verified C compiler. PISA (Jiang et al., 2021)
is a dataset derived from Isabelle’s Archive of Formal Proofs (AFP), which contains theorems and
proofs from general mathematics as opposed to specifically competition problems.
Informal Benchmarks. There are also several popular benchmarks for informal (natural-language)
mathematical reasoning. MATH (Hendrycks et al., 2021) is a collection of 12,500 mathematics
problems, in natural language only, sourced from various high school competitions additionally supplied with step-by-step informal proofs. GSM8K (Cobbe et al., 2021) is a collection of 8,500 grade
school mathematics problems, intended to benchmark natural language reasoning for mathematicsstyle problems. While benefiting from the abundance of natural language data, these benchmarks
fall short, since in natural language, there is no automatic mechanism for certifiable verification of
the reasoning path which yielded the numerical answer. For this reason, metrics for success on these
benchmarks usually rely on exact-answer match, because verifying reasoning paths is imprecise and
is best done by human experts. By contrast, theorem proving in formal proof assistants comes with
a high-confidence signal for correctness of the reasoning path, or proof, of a theorem.
Methods for Formal Theorem-Proving. Significant effort has been spent on developing automatic
theorem-provers for formal mathematics (Li et al., 2024). Most recent efforts train a neural module
to perform proof-step prediction, which is then wrapped in a search mechanism to locate a valid
proof. GPT-f (Polu and Sutskever, 2020) trains a transformer-based architecture on data derived
from the Metamath library (Megill and Wheeler, 2019) for proof synthesis. PACT expands on GPTf by incorporating auxiliary training tasks for the neural module towards theorem-proving in Lean
3. FMSCL (Polu et al., 2022) alternates proof-search and training to finetune their neural model
based on proofs found during search. HTPS (Lample et al., 2022) uses a transformer-based neural
module in an online, MCTS-inspired proof search in Lean 3 and Metamath. COPRA (Thakur et al.,
2024) uses GPT-4 supplied with error feedback from the environment and lemmas from a retrieval
mechanism for an agentic proof-search in Lean 3 and Coq. LLEMMA (Azerbayev et al., 2024)
continues pretraining of Code Llama on a mathematics-based corpus dubbed Proof-Pile-2, and uses
their learned model for formal proof search in Lean 4. DeepSeek-Prover Xin et al. (2024) produces
synthetic Lean data en-masse for training their prover model. AlphaGeometry (Trinh et al., 2024)
targets IMO problems in a geometry-specific proof assistant language using an interleaving search,
where a neural module synthesizes auxiliary constructions and a symbolic engine produces deductive closures.
The Isabelle proof assistant (Paulson, 1994), given its declarative nature and powerful symbolic
automation, has too been the focus of much attention for neural theorem proving. Isabelle features
Sledgehammer (Paulson and Blanchette, 2015), an automated reasoning tool which calls external
automated theorem provers (ATPs) for proof synthesis. Draft, Sketch, Prove (DSP) (Jiang et al.,
2022b) uses a high-caliber LLM to generate natural language proofs and converts them into formal
sketches in Isabelle, whose gaps are then filled using Sledgehammer. Zhao et al. (2023) employed a
diffusion model to predict an optimal ordering of the few-shot examples provided to the LLM in the
DSP pipeline. Lyra (Zheng et al., 2023) utilized error-feedback from Isabelle’s execution to modify
holes in the sketch which were too difficult for the symbolic prover. POETRY (Wang et al., 2024)
leverages recursion for theorem-proving and trains a neural module to produce proof sketches, as
opposed to using in-context learning with an LLM. LEGO-Prover (Wang et al., 2023) extends the
pipeline by incorporating a skill library which grows throughout the proof search task. Separate from
approaches utilizing natural language proofs, Thor (Jiang et al., 2022a) trains a transformer-based
architecture to predict successful invocations of Sledgehammer, along with the usual proof-step
objective. Baldur (First et al., 2023) explored repairing erroneous proofs in Isabelle through the use
of LLMs.
The Coq interactive theorem prover has seen use in both software verification and general mathematics. Famously, mechanized proofs of the Four Colour Theorem (Robertson et al., 1997) and
the Feit-Thompson theorem (Gonthier et al., 2013a) were produced in Coq. Similarly, numerous
software verification projects have been undertaken in Coq, such as CompCert (a formally verified C compiler) and Verdi (Gonthier et al., 2013b) (a framework for verifying distributed systems
protocols). ASTactic (Yang and Deng, 2019) trained a neural module involving recurrent networks
and attention on data collected from various Coq repositories. Proverbot9001 (Sanchez-Stern et al.,
2020) targeted proof synthesis on a set of held-out theorems from the CompCert project. COPRA
(Thakur et al., 2024) also evaluates on this CompCert-based task using their multilingual approach.
Tactician (Blaauwbroek et al., 2020) develops a platform for proof automation for the Coq practi
-----
tioner, with support for experimenting with new machine learning techniques for tactic prediction
and proof search. Zhang et al. (2021) explores several online learning techniques inside Tactician,
including an approximate k-nearest neighbors method via locality sensitive hashing which we use
for our evaluation. Graph2Tac (Blaauwbroek et al., 2024) uses graph neural networks for learning
online hierarchical representations of new theorems and definitions, and is used for proof search
within Tactician.
6 Conclusion
We presented PUTNAMBENCH, a benchmark for neural theorem-proving consisting of formalizations of Putnam competition problems. A distinctive feature of PUTNAMBENCH is that it spans a
broad range of undergraduate-level mathematical topics, including algebra, analysis, and number
theory. Another unique benefit is that it includes problems in Lean 4, Isabelle, and Coq, the three
most popular formal proof frameworks.
As our experiments show, PUTNAMBENCH is a challenging benchmark: all current theorem-proving
approaches fail to solve more than a handful of its problems. We believe that these failures include
two root causes: (i) While current theorem-provers can effectively stitch together standard proof
steps well-represented in the training corpus, they often fail at synthesizing new lemmas and orchestrating these lemmas into intricate proofs. (ii) Current methods often fail to leverage the deep
knowledge available in mathematics repositories. Developing a new generation of neural theoremprovers in which these weaknesses are at least partly addressed is an exciting direction of future
research.
Acknowledgements. This work was supported by NSF awards CCF-2212559 and CCF-2403211,
the NSF Institute for Foundations of Machine Learning, and a gift by the Aziz Family Foundation.
We thank Lasse Blaauwbroek, Jason Rute, and Kaiyu Yang for useful discussions and support with
setting up experiments.
References
[AFP. Archive of Formal Proofs — isa-afp.org. https://www.isa-afp.org/, 2004. [Accessed](https://www.isa-afp.org/)
25-05-2024].
G.L. Alexanderson, L.F. Klosinski, and L.C. Larson. The William Lowell Putnam Mathematical Competition: Problems and Solutions, 1965-1984. MAA problem books series. Mathematical Association of America, 1985. ISBN 9780883854419. URL
[https://books.google.com/books?id=mv0oAQAAMAAJ.](https://books.google.com/books?id=mv0oAQAAMAAJ)
Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W. Ayers, Dragomir Radev,
and Jeremy Avigad. Proofnet: Autoformalizing and formally proving undergraduate-level mathematics, 2023.
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An Open Language Model
For Mathematics, 2024.
Lasse Blaauwbroek, Josef Urban, and Herman Geuvers. The Tactician: A Seamless, Interactive Tactic Learner and Prover for Coq, page 271–277. Springer International Publishing, 2020. ISBN 9783030535186. doi: 10.1007/978-3-030-53518-6_17. URL
[http://dx.doi.org/10.1007/978-3-030-53518-6_17.](http://dx.doi.org/10.1007/978-3-030-53518-6_17)
Lasse Blaauwbroek, Miroslav Olšák, Jason Rute, Fidel Ivan Schaposnik Massolo, Jelle Piepenbrock,
and Vasily Pestun. Graph2tac: Online representation learning of formal math concepts, 2024.
[URL https://arxiv.org/abs/2401.02949.](https://arxiv.org/abs/2401.02949)
IMO Grand Challenge. IMO Grand Challenge — imo-grand-challenge.github.io.
[https://imo-grand-challenge.github.io/, 2019. [Accessed 01-06-2024].](https://imo-grand-challenge.github.io/)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems, 2021.
-----
William-Lowell Putnam Mathematical Competition. William Lowell Putnam Mathematical Compe[tition | Mathematical Association of America — maa.org. https://maa.org/putnam-2/, 2024.](https://maa.org/putnam-2/)
[Accessed 08-07-2024].
Compfiles. GitHub - dwrensha/compfiles: Catalog Of Math Problems Formalized In Lean —
[github.com. https://github.com/dwrensha/compfiles, 2024. [Accessed 25-05-2024].](https://github.com/dwrensha/compfiles)
Coqtail. GitHub - whonore/Coqtail: Interactive Coq Proofs in Vim — github.com.
[https://github.com/whonore/Coqtail, 2017. [Accessed 01-06-2024].](https://github.com/whonore/Coqtail)
[Coquelicot. GitHub - thery/coquelicot — github.com. https://github.com/thery/coquelicot,](https://github.com/thery/coquelicot)
2015. [Accessed 01-06-2024].
Łukasz Czajka and Cezary Kaliszyk. Hammer for coq: Automation for dependent type theory.
Journal of automated reasoning, 61:423–453, 2018.
Emily First, Markus N Rabe, Talia Ringer, and Yuriy Brun. Baldur: whole-proof generation and
repair with large language models. arXiv preprint arXiv:2303.04910, 2023.
GeoCoq. GitHub - GeoCoq/GeoCoq: A formalization of geometry in Coq based on Tarski’s axiom
[system — github.com. https://github.com/GeoCoq/GeoCoq, 2015. [Accessed 01-06-2024].](https://github.com/GeoCoq/GeoCoq)
Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot,
Stéphane Le Roux, Assia Mahboubi, Russell O’Connor, Sidi Ould Biha, Ioana Pasca, Laurence
Rideau, Alexey Solovyev, Enrico Tassi, and Laurent Théry. A machine-checked proof of the odd
order theorem. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie, editors, Interactive Theorem Proving, pages 163–179, Berlin, Heidelberg, 2013a. Springer Berlin Heidelberg.
ISBN 978-3-642-39634-2.
Georges Gonthier, Andrea Asperti, Jeremy Avigad, Yves Bertot, Cyril Cohen, François Garillot,
Stéphane Le Roux, Assia Mahboubi, Russell O’Connor, Sidi Ould Biha, Ioana Pasca, Laurence
Rideau, Alexey Solovyev, Enrico Tassi, and Laurent Théry. A machine-checked proof of the odd
order theorem. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie, editors, Interactive Theorem Proving, pages 163–179, Berlin, Heidelberg, 2013b. Springer Berlin Heidelberg.
ISBN 978-3-642-39634-2.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset, 2021.
Gérard Huet, Gilles Kahn, and Christine Paulin-Mohring. The coq proof assistant a tutorial. Rapport
Technique, 178, 1997.
Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzygó´zd´z, Piotr
Miło´s, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models
and automated theorem provers, 2022a.
Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée
Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem
provers with informal proofs. arXiv preprint arXiv:2210.12283, 2022b.
Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. Lisa: Language models of
isabelle proofs, 2021.
K.S. Kedlaya, B. Poonen, R. Vakil, and Mathematical Association of America. The William Lowell
Putnam Mathematical Competition 1985-2000: Problems, Solutions and Commentary. MAA
Problem Book Series. Mathematical Association of America, 2002. ISBN 9780883858073. URL
[https://books.google.com/books?id=AA-lOA1nPDcC.](https://books.google.com/books?id=AA-lOA1nPDcC)
K.S. Kedlaya, D.M. Kane, J.M. Kane, and E.M. O’Dorney. The William Lowell Putnam Mathematical Competition 2001–2016: Problems, Solutions, and Commentary. Problem Books. American Mathematical Society, 2020. ISBN 9781470454272. URL
[https://books.google.com/books?id=QwGWzQEACAAJ.](https://books.google.com/books?id=QwGWzQEACAAJ)
-----
Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat,
Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theorem
proving. Advances in Neural Information Processing Systems, 35:26337–26349, 2022.
Xavier Leroy. Formal verification of a realistic compiler. Commun. ACM, 52(7):
107–115, jul 2009. ISSN 0001-0782. doi: 10.1145/1538788.1538814. URL
[https://doi.org/10.1145/1538788.1538814.](https://doi.org/10.1145/1538788.1538814)
Zhaoyu Li, Jialiang Sun, Logan Murphy, Qidong Su, Zenan Li, Xian Zhang, Kaiyu Yang, and Xujie
Si. A survey on deep learning for theorem proving, 2024.
Chengwu Liu, Jianhao Shen, Huajian Xin, Zhengying Liu, Ye Yuan, Haiming Wang, Wei Ju,
Chuanyang Zheng, Yichun Yin, Lin Li, Ming Zhang, and Qun Liu. Fimo: A challenge formal
dataset for automated theorem proving, 2023.
Mathcomp. GitHub - math-comp/math-comp: Mathematical Components — github.com.
[https://github.com/math-comp/math-comp, 2015. [Accessed 01-06-2024].](https://github.com/math-comp/math-comp)
mathcomp-analysis. GitHub - math-comp/analysis: Mathematical Components compliant Analysis
[Library — github.com. https://github.com/math-comp/analysis, 2017. [Accessed 05-06-](https://github.com/math-comp/analysis)
2024].
The mathlib Community. The lean mathematical library. In Proceedings of the 9th ACM SIGPLAN
International Conference on Certified Programs and Proofs, POPL ’20. ACM, January 2020. doi:
[10.1145/3372885.3373824. URL http://dx.doi.org/10.1145/3372885.3373824.](http://dx.doi.org/10.1145/3372885.3373824)
Norman D. Megill and David A. Wheeler. Metamath: A Computer Language for
Pure Mathematics, 2019. [URL http://us.metamath.org/downloads/metamath.pdf.](http://us.metamath.org/downloads/metamath.pdf)
http://us.metamath.org/downloads/metamath.pdf.
Leonardo de Moura and Sebastian Ullrich. The lean 4 theorem prover and programming language.
In Automated Deduction–CADE 28: 28th International Conference on Automated Deduction,
Virtual Event, July 12–15, 2021, Proceedings 28, pages 625–635. Springer, 2021.
Allen Newell, John Clifford Shaw, and Herbert A Simon. Empirical explorations of the logic theory
machine: a case study in heuristic. In Papers presented at the February 26-28, 1957, western
joint computer conference: Techniques for reliability, pages 218–230, 1957.
OpenAI. Gpt-4 technical report, 2023.
Lawrence Paulson and Jasmin Blanchette. Three years of experience with sledgehammer, a practical
link between automatic and interactive theorem provers, 02 2015.
Lawrence C Paulson. Isabelle: A generic theorem prover. Springer, 1994.
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
arXiv preprint arXiv:2009.03393, 2020.
Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning, 2022.
[Prize. AIMO Prize — aimoprize.com. https://aimoprize.com/, 2023. [Accessed 01-06-2024].](https://aimoprize.com/)
Neil Robertson, Daniel Sanders, Paul Seymour, and Robin Thomas. The fourcolour theorem. Journal of Combinatorial Theory, Series B, 70(1):2–44,
1997. ISSN 0095-8956. doi: https://doi.org/10.1006/jctb.1997.1750. URL
[https://www.sciencedirect.com/science/article/pii/S0095895697917500.](https://www.sciencedirect.com/science/article/pii/S0095895697917500)
Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating correctness
proofs with neural networks. In Proceedings of the 4th ACM SIGPLAN International Workshop
on Machine Learning and Programming Languages, pages 1–10, 2020.
Amitayush Thakur, George Tsoukalas, Yeming Wen, Jimmy Xin, and Swarat Chaudhuri. An InContext Learning Agent for Formal Theorem-Proving, 2024.
-----
Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry
without human demonstrations. Nature, 625(7995):476–482, 2024.
Rahul Vishwakarma, Pietro Monticone, and Abhijit Niser. GitHub -
rahul3613/ProofNet-lean4: ProofNet dataset ported into Lean 4 — github.com.
[https://github.com/rahul3613/ProofNet-lean4, 2024. [Accessed 01-06-2024].](https://github.com/rahul3613/ProofNet-lean4)
Haiming Wang, Huajian Xin, Chuanyang Zheng, Lin Li, Zhengying Liu, Qingxing Cao, Yinya
Huang, Jing Xiong, Han Shi, Enze Xie, Jian Yin, Zhenguo Li, Heng Liao, and Xiaodan Liang.
Lego-prover: Neural theorem proving with growing libraries, 2023.
Haiming Wang, Huajian Xin, Zhengying Liu, Wenda Li, Yinya Huang, Jianqiao Lu, Zhicheng Yang,
Jing Tang, Jian Yin, Zhenguo Li, and Xiaodan Liang. Proving theorems recursively, 2024.
Makarius Wenzel, Lawrence C Paulson, and Tobias Nipkow. The isabelle framework. In Theorem Proving in Higher Order Logics: 21st International Conference, TPHOLs 2008, Montreal,
Canada, August 18-21, 2008. Proceedings 21, pages 33–38. Springer, 2008.
Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li,
and Xiaodan Liang. Deepseek-prover: Advancing theorem proving in llms through large-scale
synthetic data, 2024.
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
International Conference on Machine Learning, pages 6984–6994. PMLR, 2019.
Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,
Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmented
language models. arXiv preprint arXiv:2306.15626, 2023.
Liao Zhang, Lasse Blaauwbroek, Bartosz Piotrowski, Prokop Cerný, Cezary Kaliszyk, and[ˇ]
Josef Urban. Online machine learning techniques for coq: A comparison, 2021. URL
[https://arxiv.org/abs/2104.05207.](https://arxiv.org/abs/2104.05207)
Xueliang Zhao, Wenda Li, and Lingpeng Kong. Decomposing the enigma: Subgoal-based demonstration learning for formal theorem proving, 2023.
Chuanyang Zheng, Haiming Wang, Enze Xie, Zhengying Liu, Jiankai Sun, Huajian Xin, Jianhao
Shen, Zhenguo Li, and Yu Li. Lyra: Orchestrating dual correction in automated theorem proving,
2023.
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
-----
Section putnam_2009_b1.
Require Import List QArith Znumtheory Reals.
Open Scope Q.
Theorem putnam_2009_b1:
let fix factl (l : list nat) : list nat :=
match l with
| nil => nil
| h :: t => fact h :: t end in
forall (q: Q), q > 0 ->
exists (n d: list nat), (forall x, (In x n \/ In x d)-> prime (Z.of_nat x)) /\
inject_Z (Z.of_nat (fold_left Nat.mul (factl n) 1%nat)) / inject_Z (Z.of_nat (
fold_left Nat.mul (factl d) 1%nat)) = q.
Proof. Admitted.
End putnam_2009_b1.
Figure 5: A formalization of Putnam 2009 B1 in Coq. The conversion operators present cast between
the rationals, integers, reals, and natural numbers.
A Appendix
A.1 Formalization difficulties in Coq
In the Coq Standard Library, operations and definitions for numbers are split across modules. The
classical reals are defined in Coq.Reals.Raxioms, the integers are defined in Coq.ZArith.BinInt, and
the natural numbers are defined in Coq.Init.Datatypes and Coq.Numbers.BinNums. The last two
modules are distinct to reflect the two different constructions of natural numbers, one in base 10 and
one in binary. The rational numbers are defined in Coq.QArith.QArith_base and the Positive type is
defined in Coq.Numbers.BinNums. Unlike the previous binary number definition, the Positive type
excludes the number zero.
Formalizing a problem may require switching between these various types using an inbuilt set of
conversions, as seen in Figure 5. For example, comparing an integer with a real number may take
the form of r = IZR i, where r is a real number and i is an integer, with the comparison being
done in the Reals scope. These additional casting operations can introduce additional complexity in
our formalizations. Figure 24 illustrates the usage of various casting operations.
Mathcomp and GeoCoq are extension libraries for the Coq proof assistant. Mathcomp is a theorybased library in that it contains high-level structures for algebra and data structures. In order to
extend its functionality, the developers have created a refinement library called CoqEAL, which
contains a framework compatible with other representations like the numerical types found in the
Coq Standard Library. While there has been substantial work on these refinements, to the best of
our knowledge, it is currently not possible to instantiate matrices or groups of real type.
GeoCoq is a library built for geometry that operates off Tarski’s Axioms. Many problems have been
formalized using the vast amount of theorems based off these axioms. However, GeoCoq’s inbuilt
numbers (a field F) lacks compatibility with the numerical representation of Coq Reals. As such,
numerical expressions and computations using concrete numbers like 16 and 97 are not natively accommodated within GeoCoq’s framework. This limitation impacts our ability to represent numbers
in Coq formalizations.
-----
Putnam 2001 B4. Let S denote the set of rational numbers different from {−1, 0, 1}. Define
f : S → S by f (x) = x − 1/x. Prove or disprove that
∞
f [(][n][)](S) = ∅,
n\=1
where f [(][n][)] denotes f composed with itself n times.
abbrev putnam_2001_b4_solution : Prop := True
theorem putnam_2001_b4
(S : Set Q)
(hS : S = univ \ {-1, 0, 1})
(f : S → S)
(hf : ∀ x : S, f x = x - 1 / (x : Q))
: ∩ n ∈ Ici 1, f^[n] '' univ = ∅↔ putnam_2001_b4_solution
:= sorry
Figure 6: A formalization of Putnam 2001 B4 in Lean 4. As the problem requires deciding whether
the infinite intersection is empty, it is not directly the statement of a theorem. We consider the
associated “solution” of this problem to be a boolean value, and factor it out from the theorem
statement. sorry is the placeholder keyword for Lean.
Putnam 2020 A3. Let a0 = π/2, and let an = sin(an 1) for n ≥ 1. Determine whether
−
∞
a[2]n
nX=1
converges.
abbrev solution : Prop := False
theorem putnam_2020_a3
(a : N → R)
(ha0 : a 0 = Real.pi / 2)
(ha : ∀ n : N, n ≥ 1 → a n = Real.sin (a (n - 1)))
: (∃ L : R, Tendsto (fun m : N => Σ n : Icc 1 m, (a n)^2) atTop (N L))
↔ putnam_2020_a3_solution
:= sorry
Figure 7: A formalization of Putnam 2020 A3 in Lean 4. As the problem requires deciding whether
the series converges, it is not directly the statement of a theorem. We consider the associated “solution” of this problem to be a boolean value, and factor it out from the theorem statement.
-----
Putnam 1997 A4. Let G be a group with identity e and φ : G → G a function such that
φ(g1)φ(g2)φ(g3) = φ(h1)φ(h2)φ(h3)
whenever g1g2g3 = e = h1h2h3. Prove that there exists an element a ∈ G such that ψ(x) =
aφ(x) is a homomorphism.
theorem putnam_1997_a4
(G : Type*)
[Group G]
(ϕ : G → G)
(hϕ : ∀ g1 g2 g3 h1 h2 h3 : G, (g1 * g2 * g3 = 1 ∧ h1 * h2 * h3 = 1)
→ ϕ g1 * ϕ g2 * ϕ g3 = ϕ h1 * ϕ h2 * ϕ h3)
: ∃ a : G, let ψ := fun g => a * ϕ g; ∀ x y : G, ψ (x * y) = ψ x * ψ y
:= sorry
Figure 8: A formalization of Putnam 1997 A4, which requires knowledge of group theory, in Lean
4. The informal statement is slightly underspecified - g1, g2, g3, h1, h2, h3 are not explicitly defined
to be in G. To produce the formalization, we must be specific about the type of gi, hi.
Putnam 2018 B1. Let P be the set of vectors defined by
a
P = b [0][ ≤] [a][ ≤] [2][,][ 0][ ≤] [b][ ≤] [100][,][ and][ a, b][ ∈] [Z]
Find all v ∈P such that the set P\{v} obtained by omitting vector v from P can be partitioned
into two sets of equal size and equal sum.
abbrev putnam_2018_b1_solution : Set (Vector Z 2) :=
{v : Vector Z 2 | ∃ b : Z, 0 ≤ b ∧ b ≤ 100 ∧ Even b ∧ v.toList = [1, b]}
theorem putnam_2018_b1
(P : Finset (Vector Z 2))
(v : Vector Z 2)
(vinP : Prop)
(Pvdiff : Finset (Vector Z 2))
(Pvpart : Prop)
(hP : P =
{v' : Vector Z 2 | 0 ≤ v'[0] ∧ v'[0] ≤ 2 ∧ 0 ≤ v'[1] ∧ v'[1] ≤ 100})
(hvinP : vinP = (v ∈ P))
(hPvdiff : Pvdiff = P \ ({v} : Finset (Vector Z 2)))
(hPvpart : Pvpart = (∃ Q R : Finset (Vector Z 2),
(Q ∪ R = Pvdiff) ∧ (Q ∩ R = ∅) ∧ (Q.card = R.card) ∧
(Σ q in Q, q[0] = Σ r in R, r[0]) ∧ (Σ q in Q, q[1] = Σ r in R, r[1])))
: (vinP ∧ Pvpart) ↔ v ∈ putnam_2018_b1_solution := sorry
Figure 9: A formalization of Putnam 2018 B1, which requires the Vector class from mathlib4.
-----
Putnam 1992 B6. Let M be a set of real n × n matrices such that
1. I ∈M, where I is the n × n identity matrix;
2. if A ∈M and B ∈M, then exactly one of AB ∈M and −AB ∈M holds;
3. if A ∈M and B ∈M, then either AB = BA or AB = −BA;
4. if A ∈M and A ̸= I, there is at least one B ∈M such that AB = −BA.
Prove that M contains at most n[2] matrices.
theorem putnam_1992_b6:
fixes n :: nat
and M :: "(real^'n^'n) set"
assumes npos: "n > 0"
and pncard: "CARD('n) = n"
and h1: "mat 1 ∈ M"
and h2: "∀A∈M. ∀B∈M. (A**B ∈ M) ̸= (-A**B ∈ M)"
and h3: "∀A∈M. ∀B∈M. (A**B = B**A) ∨ (A**B = -B**A)"
and h4: "∀A∈M. (A ̸= mat 1 → (∃B∈M. A**B = -B**A))"
shows "card M ≤ n^2"
sorry
Figure 10: An Isabelle formalization of Putnam 1992 B6.
Putnam 2012 A3. Let f : [−1, 1] → R be a continuous function such that
1. f (x) = [2][−]2[x][2] f ( 2−x[2]x[2][ )][ for every][ x][ in][ [][−][1][,][ 1]][,]
[−]
2. f (0) = 1, and
f (x)
3. limx→1− √1−x exists and is finite.
Prove that f is unique, and express f (x) in closed form.
definition putnam_2012_a3_solution :: "real ⇒ real" where
"putnam_2012_a3_solution ≡ (λx::real. sqrt (1 - x^2))"
theorem putnam_2012_a3:
fixes S :: "real set"
and hf :: "(real ⇒ real) ⇒ bool"
defines "S ≡ {-1..1}"
and "hf ≡ (λf::real⇒real. continuous_on S f ∧
(∀x∈S. f x = ((2 - x^2)/2)*f (x^2/(2 - x^2))) ∧ f 0 = 1 ∧
(∃y::real. filterlim (λx::real. (f x)/sqrt (1 - x)) (nhds y) (at_left 1)))"
shows "hf putnam_2012_a3_solution ∧
(∀f::real⇒real. hf f → (∀x∈S. f x = putnam_2012_a3_solution x))"
sorry
Figure 11: An Isabelle formalization of Putnam 2012 A3. The mechanism for factoring the solution
out of the theorem statement is similar to that of Lean.
-----
Putnam 1980 A5. Let P (t) be a nonconstant polynomial with real coefficients. Prove that the
system of simultaneous equations
x
0 =
Z0
x
P (t) sin tdt =
Z0
P (t) cos tdt
has only finitely many real solutions x.
Theorem putnam_1980_a5
(n : nat)
(npos : gt n 0)
(coeff : nat -> R)
(hcoeff : coeff n <> 0)
(p : R -> R := fun x => sum_n (fun i => coeff i * x ^ i) (S n))
(h1 : nat -> Prop := fun a => RInt (fun x => p x * sin x) 0 (INR a) = 0)
(h2 : nat -> Prop := fun a => RInt (fun x => p x * cos x) 0 (INR a) = 0)
: exists (m: nat), forall (b: nat), h1 b /\ h2 b -> lt b m.
Proof. Admitted.
Figure 12: A Coq formalization of Putnam 1980 A5. This formalization is done using Coquelicot, a
Coq repository outside of the standard library. The Coq equivalent of sorry is Admitted.
Putnam 2017 B2. Suppose that a positive integer N can be expressed as the sum of k consecutive positive integers
N = a + (a + 1) + (a + 2) + · · · + (a + k − 1)
for k = 2017 but for no other values of k > 1. Considering all positive integers N with this
property, what is the smallest positive integer a that occurs in any of these expressions?
Definition putnam_2017_b2_solution := 16.
Theorem putnam_2017_b2
(mina : nat)
(posMin : mina > 0)
(A : nat -> nat -> nat := fun a k => Z.to_nat (floor (sum_n (fun i => Raxioms.
INR (a + (i + 1))) k)))
(p : nat -> nat -> Prop := fun N k => exists (a: nat), a > 0 /\ A a k = N)
(q : nat -> Prop := fun N => p N 2017 /\ forall (k: nat), k > 1 -> k <> 2017 ->
~ p N k)
(hmina : q (A mina 2017))
(hminalb : (forall (a: nat), a > 0 /\ q (A a 2017) -> mina <= a))
: mina = putnam_2017_b2_solution.
Proof. Admitted.
Figure 13: A Coq formalization of Putnam 2017 B2. As the problem requires a numerical witness,
we factor that out using Coq’s syntax for making definitions.
-----
Putnam 1988 B1. A composite is a product ab with a and b not necessarily distinct integers
{2, 3, 4, . . . }. Show that every composite is expressible as xy +xz +yz +1 with x, y, z positive
integers.
Require Import ZArith Znumtheory Lia.
Open Scope Z.
Theorem putnam_1988_b1:
forall (a : Z), a >= 2 ->
forall (b : Z), b >= 2 ->
exists (x y z: Z), x > 0 /\ y > 0 /\ z > 0 /\
a * b = x * y + y * z + z * x + 1.
Proof.
intros a Ha b Hb.
exists 1, (a - 1), (b - 1).
split.
- lia.
- split.
+ lia.
+ split.
- lia.
Qed.
Figure 14: A Coq proof of Putnam 1988 B1 generated through a few-shot invocation of GPT-4. The
proof is similar to that of the Lean version, also discovered by GPT-4. The main difficulty of the
problem is to choose the values of x, y, z given a, b. Once correctly supplied, the remainder of the
proof is routine and can be done with automated methods like lia which handles linear arithmetic.
theorem mathd_algebra_107
(x y : R)
(h0 : x^2 + 8 * x + y^2 - 6 * y = 0)
: (x + 4)^2 + (y-3)^2 = 5^2 := sorry
theorem mathd_numbertheory_85 :
1 * 3^3 + 2 * 3^2 + 2*3 + 2 = 53
:= sorry
Figure 15: Examples of formalizations of easy problems in MINIF2F. While useful for benchmarking straightforward mathematical reasoning in a formal setting, these problems are quite simple
compared to the competition problems present in PUTNAMBENCH. We note that MINIF2F does
include some formalizations of problems sourced directly from high school competitions, but these
are fewer in number.
-----
You are proficient at formal theorem-proving in Lean 4. Given a theorem
֒→ statement in Lean 4, generate the proof in Lean 4. You can assume that
֒→ you have access to Lean's mathlib library.
The theorem is described in the following format:
**1. The theorem statement using the `[THEOREM]` keyword.**
**3. The theorem description ends with the keyword `[END]`.**
Generate a Lean 4 proof for the theorem which starts with the keyword
֒→ `[PROOF]` followed by the proof of the theorem. The syntax for Lean 4
֒→ is different than that of Lean 3 - premises like "Nat.dvd_mul" and
֒→ "Finset.singleton_injective" exist in Lean 4, the equivalent in Lean 3
֒→ is "nat.dvd_mul" and "finset.singleton_injective" which DO NOT WORK in
֒→ Lean 4. Additionally, you cannot chain tactics into one step using ','
֒→ this will NOT work - you can use ';' instead but try to avoid such usage
֒→ where not necessary! When doing rewrites you MUST wrap the premise in
֒→ brackets: "rw [h]". If you want to do multiple rewrites at once you can
֒→ do something like "rw [step1, step2, step3]". Always predict one tactic
֒→ at a time, though you can predict the "have" tactic and may supply a
֒→ proof for it with tactics split by ";". You can provide witnesses to
֒→ consecutive existential quantifiers all at once, for example 'use 1, 2,
֒→ 3' but NOT as a list 'use [1, 2, 3]' - these are not the same things!
֒→ You can introduce with "intro" everything you think you can introduce at
֒→ once. In Lean 4, you can split apart conjunctions with "constructor" NOT
֒→ "split". You should use the "ring" tactic to handle goals that follow
֒→ from ring axioms, especially instead of doing a long series of rewrites
֒→ or calculations. Similarly, "linarith" can be useful for solving goals
֒→ involving linear arithmetic. Do NOT indent tactics, every new line
֒→ should not have spaces to start! PLEASE use Lean 4 syntax only! The
֒→ proof ends with the keyword `[END]`. Also please DO NOT write `sorry`
֒→ in the proof. You can assume that the theorem is provable.
Figure 16: Parts of the “system prompt” used by GPT-4 for Lean 4 evaluations. Due to GPT-4’s
tendency towards producing outputs in Lean 3 syntax, our prompt places special attention towards
preventing such syntactic mistakes. A similar modification is made to COPRA’s system prompt for
Lean 3.
-----
Goals to prove:
[GOALS]
[GOAL] 1
DifferentiableAt R (fun x => g x / hg0 x) 0 → DifferentiableAt R g 0
[HYPOTHESES] 1
[HYPOTHESIS] case mpr
[HYPOTHESIS] f : True
[HYPOTHESIS] g hg0 : R → R
[HYPOTHESIS] hcg : hg0 0 ̸= 0
[HYPOTHESIS] hfg : ContinuousAt hg0 0
[HYPOTHESIS] hfg_div : DifferentiableAt R (fun x => g x * hg0 x) 0
[STEPS]
[STEP] constructor
[STEP] intro h
[STEP] trivial
[STEP] intros f g hg0 hcg hfg hfg_div
[INCORRECT STEPS]
[STEP] apply differentiable_at_of_mul
[LAST STEP]
apply differentiable_at.div
[ERROR MESSAGE]
error: unknown identifier 'differentiable_at.div'
[END]
Figure 17: An example of a failed tactic prediction during proof search for Putnam 2011 B2 using
COPRA in Lean 4. GPT-4 predicts a tactic involving the premise “differentiable_at.div,” which
exists in Lean 3, but not Lean 4. Even with the system prompt asserting outputs should involve Lean
4 syntax alone, GPT-4 is not always capable of making the distinction.
theorem putnam_2001_a1
(S : Type*)
[Mul S]
(hS : ∀ a b : S, (a * b) * a = b)
: ∀ a b : S, a * (b * a) = b := by
intro a b
have h1 : (a * (b * a)) * a = b * a := by
rw ←[mul_assoc]
rw [hS]
rw [hS]
apply h1
Figure 18: A failed proof generated by few-shot invocation of GPT-4. GPT-4 misunderstands that
the hypothesis [Mul S], which gives an operation ⋆ and asserts it is a binary operation on S, also
asserts associativity of the operation. The tactic rw [←mul_assoc], which performs a rewrite using
the associativity of ⋆, is hence not applicable in this setting.
-----
Putnam 1986 B1. Inscribe a rectangle of base b and height h in a circle of radius one, and
inscribe an isosceles triangle in the region of the circle cut off by one base of the rectangle. For
what values of h do the rectangle and triangle have the same area?
theorem putnam_1986_b1:
fixes b h :: "real"
assumes hbh : "b > 0 ∧ h > 0 ∧ b^2 + h^2 = 2^2"
and areaeq : "b*h = 0.5 * b * (1 - h/2)"
shows "h = ( 2 / 5 )"
proof from assms have eq1: "b^2 + h^2 = 4" by simp
from areaeq have eq2: "b * h = 0.5 * b * (1 - h/2)" by simp
then have eq3: "b * h = 0.5 * b - 0.25 * b * h" by algebra
then have eq4: "1.25 * b * h = 0.5 * b" by algebra
then have eq5: "h = 0.5 / 1.25" using assms(1) by simp
then show ?thesis by simp
qed
Figure 19: A successful proof in Isabelle discovered using GPT-4. While the theorem statement
differs from the formalization, which is algebraic, we note that the official solution is also algebraic
and is similar to the generated Isabelle proof.
-----
Putnam 1995 A1. Let S be a set of real numbers which is closed under multiplication (that is,
if a and b are in S, then so is ab. Let T and U be disjoint subsets of S whose union is S. Given
that the product of any three (not necessarily distinct) elements of T is in T and the product of
any three elements of U is in U, show that at least one of the two subsets T, U is closed under
multiplication.
theorem putnam_1995_a1:
fixes S :: "real set"
and T U :: "real set"
assumes hS: "∀a∈S. ∀b∈S. a * b ∈ S"
and hsub: "T ⊆ S ∧ U ⊆ S"
and hunion: "(T ∪ U) = S"
and hdisj: "(T ∩ U) = {}"
and hT3: "∀a∈T. ∀b∈T. ∀c∈T. a * b * c ∈ T"
and hU3: "∀a∈U. ∀b∈U. ∀c∈U. a * b * c ∈ U"
shows "(∀a∈T. ∀b∈T. a * b ∈ T) ∨ (∀a∈U. ∀b∈U. a * b ∈ U)"
proof (rule ccontr)
assume "¬ ((∀a∈T. ∀b∈T. a * b ∈ T) ∨ (∀a∈U. ∀b∈U. a * b ∈ U))"
then have "∃t1∈T. ∃t2∈T. t1 * t2 /∈ T" "∃u1∈U. ∃u2∈U. u1 * u2 /∈ U" by auto
then obtain t1 t2 where t1t2: "t1 ∈ T" "t2 ∈ T" "t1 * t2 ∈ U"
using hS hunion by blast
then obtain u1 u2 where u1u2: "u1 ∈ U" "u2 ∈ U" "u1 * u2 ∈ T"
using ‹¬ ((∀a∈T. ∀b∈T. a * b ∈ T) ∨ (∀a∈U. ∀b∈U. a * b ∈ U))› hS hunion
by blast
have "t1 * t2 * t3 ∈ T" if "t3 ∈ T" for t3 using hT3 t1t2(1) t1t2(2) that
by auto
then have "t1 * t2 * t3 ∈ U" if "t3 ∈ T" for t3 using t1t2(3) hS that
by (metis disjoint_iff hU3 hdisj mult.commute u1u2(1) u1u2(2) u1u2(3))
then have False using hdisj t1t2(1) t1t2(2)
by (meson disjoint_iff_not_equal hT3)
then show False by blast
qed
Figure 20: A successful proof in Isabelle discovered using DSP. In particular, this proof is not found
using a single invocation of sledgehammer, so the sketching mechanism of DSP is crucial for this
problem. We note that the DSP pipeline involves using an LLM (GPT-4) to synthesize an informal
proof which is translated into a sketch in Isabelle - this can potentially be a source of indirect dataset
contamination, as we cannot ensure the informal proofs are not present in GPT-4’s training data.
-----
theorem putnam_1971_b1:
fixes Smul :: "'S ⇒ 'S ⇒ 'S" (infixl "*" 70)
assumes hself: "∀x::'S. x * x = x"
and h2: "∀x y z::'S. (x * y) * z = (y * z) * x"
shows "∀x y z::'S. (x * y) * z = x * (y * z) ∧ x * y = y * x"
proof
have comm: "∀x y::'S. x * y = y * x"
proof
fix x y :: 'S
have "(x * y) * x = (y * x) * x" using h2 by blast (* sledgehammer *)
also have "... = y * x" using hself by (metis h2) (* sledgehammer *)
finally have "(x * y) * x = y * x" by simp (* sledgehammer *)
then have "x * y = y * x" using hself by (metis h2) (* sledgehammer *)
thus "x * y = y * x" by simp
qed
have assoc: "∀x y z::'S. (x * y) * z = x * (y * z)"
proof
fix x y z :: 'S
have "(x * y) * z = (y * z) * x" using h2 sledgehammer
also have "... = x * (y * z)" using comm sledgehammer
finally show "(x * y) * z = x * (y * z)" by simp
qed
show "∀x y z::'S. (x * y) * z = x * (y * z) ∧ x * y = y * x"
using assoc comm by simp
qed
Figure 21: An erroneous generation produced by DSP for Putnam 1971 B1. While a single invocation of Sledgehammer can prove this problem, the formal sketch generated in the pipeline is much
more complex and is erroneous, leading to a failed proof attempt.
theorem putnam_1971_b1:
fixes Smul :: "'S ⇒ 'S ⇒ 'S" (infixl "*" 70)
assumes hself: "∀x::'S. x * x = x"
and h2: "∀x y z::'S. (x * y) * z = (y * z) * x"
shows "∀x y z::'S. (x * y) * z = x * (y * z) ∧ x * y = y * x"
theorem putnam_2012_a2:
fixes Smul :: "'S ⇒'S ⇒ 'S" (infixl "*" 70)
and a b c :: 'S
assumes Smulasg: "abel_semigroup Smul"
and hS: "∀x y::'S. ∃z::'S. x * z = y"
and habc: "a * c = b * c"
shows "a = b"
Figure 22: The other two Isabelle formalizations solved with invocations of Sledgehammer. We note
that the problems Sledgehammer was capable of solving are all problems involving binary operations
on sets. It is not surprising that SMT solvers are capable of solving such problems, which do not
require reasoning about complicated objects.
-----
Goals to prove:
[GOALS]
[GOAL] 1
1 > 0 /\
1 > 0 /\
a * b - 1 > 0 /\ a * b = 1 * 1 + 1 * (a * b - 1) + (a * b - 1) * 1 + 1
[HYPOTHESES] 1
[HYPOTHESIS] a : Z
[HYPOTHESIS] Ha : a >= 2
[HYPOTHESIS] b : Z
[HYPOTHESIS] Hb : b >= 2
[STEPS]
[STEP] intros a Ha b Hb.
[LAST STEP]
exists 1, 1, (a * b - 1).
[SUCCESS]
[END]
...
Goals to prove: # Step 32 of Search
[GOALS]
[GOAL] 1
a - 1 > 0 /\
1 > 0 /\
b - 1 > 0 /\ a * b = (a - 1) * 1 + 1 * (b - 1) + (b - 1) * (a - 1) + 1
[HYPOTHESES] 1
[HYPOTHESIS] a : Z
[HYPOTHESIS] Ha : a >= 2
[HYPOTHESIS] b : Z
[HYPOTHESIS] Hb : b >= 2
[STEPS]
[STEP] intros a Ha b Hb.
[LAST STEP]
exists (a - 1), 1, (b - 1).
[SUCCESS]
[END]
Figure 23: Early in COPRA’s attempt on Putnam 1988 B1, an incorrect prediction of x, y, z given
a, b is made, which dooms that path of search as the most crucial step is the correct choice. Later,
at step 32 of search, COPRA backtracks and then successfully predicts a correct choice for x, y, z.
Once this step is generated, the remainder of the proof is straightforward.
-----
Z inject_Z Q
IZR
Z.of _nat Z.to_nat
INR
floor
nat
nat
Figure 24: A diagram of conversion methods for the Coq Standard Library.
-----
| [
"Amitayush, Thakur",
"George, Tsoukalas",
"Jasper, Lee",
"Swarat, Chaudhuri",
"John, Jennings",
"Jimmy, Xin",
"Michelle, Ding",
"Michael, Jennings"
] | 2024-07-15T00:00:00 | NeurIPS 2024 | true | 1 | 0 | [
"Lean",
"Isabelle",
"Coq"
] | http://arxiv.org/abs/2407.11214 | https://arxiv.org/abs/2407.11214 | https://www.semanticscholar.org/paper/6354569d60b80c85b7bd557b80e6d9a5cf719d6e |
REFACTOR: Learning to Extract Theorems from Proofs | Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR) for training neural networks to mimic this ability in formal mathematical theorem proving. We show on a set of unseen proofs, REFACTOR is able to extract 19.6\% of the theorems that humans would use to write the proofs. When applying the model to the existing Metamath library, REFACTOR extracted 16 new theorems. With newly extracted theorems, we show that the existing proofs in the MetaMath database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of 733.5 times, and help shorten the proof lengths. Lastly, we demonstrate that the prover trained on the new-theorem refactored dataset proves more test theorems and outperforms state-of-the-art baselines by frequently leveraging a diverse set of newly extracted theorems. | This paper shows on a set of unseen proofs, REFACTOR is able to extract 19.6% of the theorems that humans would use to write the proofs, and demonstrates that the prover trained on the new-theorem refactored dataset proves more test theorems and outperforms state-of-the-art baselines by frequently leveraging a diverse set of newly extracted theorems. | # REFACTOR: LEARNING TO EXTRACT THEOREMS
## FROM PROOFS
**Anonymous authors**
Paper under double-blind review
ABSTRACT
Human mathematicians are often good at recognizing modular and reusable theorems that make complex mathematical results within reach. In this paper, we
propose a novel method called theoREm-from-prooF extrACTOR (REFACTOR)
for training neural networks to mimic this ability in formal mathematical theorem
proving. We show on a set of unseen proofs, REFACTOR is able to extract 19.6%
of the theorems that humans would use to write the proofs. When applying the
model to the existing Metamath library, REFACTOR extracted 16 new theorems.
With newly extracted theorems, we show that the existing proofs in the MetaMath
database can be refactored. The new theorems are used very frequently after refactoring, with an average usage of 733.5 times, and help to shorten the proof lengths.
Lastly, we demonstrate that the prover trained on the new-theorem refactored
dataset proves relatively 14-30% more test theorems by frequently leveraging a
diverse set of newly extracted theorems.
1 INTRODUCTION
In the history of calculus, one remarkable early achievement was made by Archimedes in the 3rd
century BC, who established a proof for the area of a parabolic segment to be 4/3 that of a certain
inscribed triangle. In the proof he gave, he made use of a technique called the method of exhaustion,
a precursor to modern calculus. However, as this was a strategy rather than a theorem, applying
it to new problems required one to grasp and generalize the pattern, as only a handful of brilliant
mathematicians were able to do. It wasn’t until millennia later that calculus finally became a powerful
and broadly applicable tool, once these reasoning patterns were crystallized into modular concepts
such as limits and integrals.
A question arises – can we train a neural network to mimic humans’ ability to extract modular
components that are useful? In this paper, we focus on a specific instance of the problem in the
context of theorem proving, where the goal is to train a neural network model that can discover
reusable theorems from a set of mathematical proofs. Specifically, we work under formal systems
where each mathematical proof is represented by a tree called proof tree. Moreover, one can extract
some connected component of the proof tree that constitutes a proof of a standalone theorem. Under
this framework, we can reduce the problem to training a model that solves a binary classification
problem where it determines whether each node in the proof tree belongs to the connected component
that the model tries to predict.
To this end, we propose a method called theoREm-from-prooF extrACTOR (REFACTOR) for
mimicking humans’ ability to extract theorems from proofs. Specifically, we propose to reverse the
process of human theorem extraction to create machine learning datasets. Given a human proof T,
we take a theorem s that is used by the proof. We then use the proof of theorem s, Ts, to re-write T as
_T_ such that T no longer contains the application of theorem s, and replace it by using the proof Ts.
_[′]_ _[′]_
We call this re-writing process the expansion of proof T using s. The expanded proof T _[′]_ becomes
the input to our model, and the model’s task is to identify a connected component of T, Ts, which
_[′]_
corresponds to the theorem s that humans would use in T .
We implement this idea within the Metamath theorem proving framework – an interactive theorem
proving assistant that allows humans to write proofs of mathematical theorems and verify the
correctness of these proofs. Metamath is known as a lightweight theorem proving assistant, and
hence can be easily integrated with machine learning models (Whalen, 2016; Polu & Sutskever,
-----
2020). It also contains one of the largest formal mathematics libraries, hence providing sufficient
background for proving university-level or Olympiad mathematics. While our approach would be
applicable to other formal systems (such as Lean (de Moura et al., 2015), Coq (Barras et al., 1999), or
HOL Light (Harrison, 1996)), we chose Metamath for this project because of its features for reduced
iteration time in the near term.
Our work establishes the first proof of concept using neural network models to extract theorems from
proofs. Our best REFACTOR model is able to extract exactly the same theorem as humans’ ground
truth (without having seeing instances of it in the training set) about 19.6% of time. We also observe
that REFACTOR’s performance improves when we increase the model size, suggesting significant
room for improvement with more computational resources.
Ultimately, the goal is not to recover known theorems but to discover new ones. To analyze those cases
where REFACTOR’s predictions don’t match the human ground truth, we developed an algorithm
to verify whether the predicted component constituent a valid proof of a theorem, and we found
REFACTOR extracted 1907 valid, new theorems. We also applied REFACTOR to proofs from the
existing Metamath library, from which REFACTOR extracted another 16 novel theorems. Remarkably,
those 16 proofs are used very frequently in the Metamath library, with an average usage of 733.5
times. Furthermore, with newly extracted theorems, we show that the human theorem library can be
refactored to be more concise: the extracted theorems reduce the total size by approximately 400k
nodes. (This is striking since REFACTOR doesn’t explicitly consider compression as an objective.)
Lastly, we demonstrate that training a prover on the refactored dataset leads to a 14-30% relative
improvement on proof success rates in proving new test theorems. Out of all proved test theorems,
there are 43.6% of them use the newly extracted theorems at least once. The usages also span across
a diverse set of theorems: 141 unique newly extracted theorems are used, further suggesting diverse
utility in new theorems we extracted.
Our main contributions are as follows: 1. We propose a novel method called REFACTOR to train
neural network models for the theorem extraction problem, 2. We demonstrate REFACTOR can
extract unseen human theorems from proofs with a nontrivial accuracy of 19.6%, 3. We show
REFACTOR is able to extract frequently used theorems from the existing human library, and as
a result, shorten the proofs of the human library by a substantial amount. 4. We show newtheorem refactored dataset can improve baseline theorem prover performance significantly with newly
extracted theorem being used frequently and diversely.
2 RELATED WORK
**Lemma Extraction** Our work is generally related to lemma mining in Vyskocil et al. (2010); Hetzlˇ
et al. (2012); Gauthier & Kaliszyk (2015); Gauthier et al. (2016) and mostly related to the work
of Kaliszyk & Urban (2015); Kaliszyk et al. (2015). The authors propose to do lemma extraction
on the synthetic proofs generated by Automated Theorem Provers (ATP) on the HOL Light and
Flyspeck libraries. They showed the lemma extracted from the synthetic proofs further improves the
ATP performances for premise selection. However, their proposed lemma selection methods require
human-defined metrics and feature engineering, whereas we propose a novel way to create datasets
for training a neural network model to do lemma/theorem selection. Unfortunately, as the Metamath
theorem prover is not equipped with ATP automation to generate synthetic proofs, we could not easily
compare our method to these past works. We leave more thorough comparisons on the other formal
systems to future work.
**Discovering Reusable Structures** Our work also is related to a broad question of discovering
reusable structures and sub-routine learning. One line of the work that is notable to mention is the
Explore-Compile-style (EC, EC2) learning algorithms (Dechter et al., 2013; Ellis et al., 2018; 2020).
These works focus on program synthesis while trying to discover a library of subroutines. As a
subroutine in programming serves a very similar role as a theorem for theorem proving, their work is
of great relevance to us. However they approach the problem from a different angle: they formalize
sub-routine learning as a compression problem, by finding the best subroutine that compresses the
explored solution space. However, these works have not yet been shown to be scalable to realistic
program synthesis tasks or theorem proving. We, on the other hand, make use of human data to
create suitable targets for subroutine learning and demonstrate the results on realistic formal theorem
-----
N: wps N: wph N: wph N: wps
PROP: wffps PROP: wffph PROP: wffph PROP: wffps
N: wph N: wi N: a1i.1 N: ax-1
PROP: wffph PROP: wff(ps->ph) PROP: |-ph PROP: |-(ph->(ps->ph))
N: ax-mp
PROP: |-(ps->ph)
N: wph N: wps N: mp1i.a N: mp1i.b
PROP: wffph PROP: wffps PROP: |-ph PROP: |-(ph->ps)
N: wps N: wch N: ax-mp
PROP: wffps PROP: wffch PROP: |-ps
N: a1i
PROP: |-(ch->ps)
(a) The proof tree of a1i.
(b) The proof tree of mp1i.
N: wch N: wps N: wph N: wps N: mp1i.a N: mp1i.b N: wps N: wch
PROP: wffch PROP: wffps PROP: wffph PROP: wffps PROP: |-ph PROP: |-(ph->ps) PROP: wffps PROP: wffch
N: wps N: wi N: ax-mp N: ax-1
PROP: wffps PROP: wff(ch->ps) PROP: |-ps PROP: |-(ps->(ch->ps))
N: ax-mp
PROP: |-(ch->ps)
(c) The proof tree of mp1i with theorem a1i’s proof expanded (colored in blue).
Figure 1: In (a) and (b), we show proof tree visualizations of the theorem a1i and mp1i. Each node
contains two pieces of information: N refers to the the name associated with the node, and PROP
refers to the proved proposition that is obtained by applying all theorem applications above that node.
In (c), we also show the expanded proof tree of mp1i with a1i’s proof being expanded and colored
in blue, namely, the set of nodes Vtarget that are the targets for our proposed learning task.
proving. Another related line of work build inductive biases to induce modular neural networks that
can act as subrountines (Andreas et al., 2015; Gaunt et al., 2017; Hudson & Manning, 2018; Mao
et al., 2019; Chang et al., 2019; Wu et al., 2020). These works usually require domain knowledge of
sub-routines for building neural architectures hence not suitable for our application.
**Machine Learning for Theorem Proving** Interactive theorem provers have recently received
enormous attention from the machine learning community as a testbed for theorem proving using
deep learning methods (Bansal et al., 2019a;b; Gauthier et al., 2018; Huang et al., 2019; Yang &
Deng, 2019; Wu et al., 2021; Li et al., 2021; Polu & Sutskever, 2020). Previous works demonstrated
that transformers can be used to solve symbolic mathematics problems (Lample & Charton, 2020),
capture the underlying semantics of logical problems relevant to verification (Hahn et al., 2020), and
also generate mathematical conjectures (Urban & Jakubuv, 2020). Rabe et al. (2020) showed that˚
self-supervised training alone can give rise to mathematical reasoning. Li et al. (2021) used language
models to synthesize high-level intermediate propositions from a local context. Piotrowski & Urban
(2020) used RNNs to solve first-order logic in ATPs. Wang et al. (2020) used machine translation
to convert synthetically generated natural language descriptions of proofs into formalized proofs.
Yang & Deng (2019) augmented theorem prover with shorter synthetic theorems which consist of
arbitrary steps from a longer proof with maximum length restriction. This is remotely related to our
work where our extraction does not have such restrictions and instead closely mimic what human
mathematicians would do.
3 METAMATH AND PROOF REPRESENTATION
In this section, we describe how one represents proof in the Metamath theorem proving environment. We would like to first note that even though the discussion here specializes in the Metamath
environment, most of the other formal systems (Isabelle/HOL, HOL Light, Coq, Lean) have very
similar representations. The fundamental idea is to think of a theorem as a function, and the proof
tree essentially represents an abstract syntax tree of a series of function applications that lead to the
intended conclusion.
Proof of a theorem in the Metamath environment is represented as a tree. For example, the proof of
the theorem a1i is shown in Figure 1 (a). Each node of the tree is associated with a name (labeled as
N), which can refer to a premise of the theorem, an axiom, or a proved theorem from the existing
theorem database. Given such a tree, one can then traverse the tree from the top to bottom, and
iteratively prove a true proposition (labeled as PROP) for each node by making a step of theorem
-----
_application. The top-level nodes usually represent the premises of the theorem, and the resulting_
proposition in the bottom node matches the conclusion of the theorem. In such a way, the theorem is
proved.
We now define one step of theorem application. When a node is connected by a set of parent nodes,
it represents a step of theorem application. In particular, one can think of a theorem as a function
that maps a set of hypothesis to a conclusion. Indeed, a node in the tree exactly represents such
function mapping, that is to map the set of propositions of the parent nodes, to a new conclusion
specified by the theorem. Formally, given a node c whose associated name refers to a theorem T, we
denote its parent nodes as _c. We can then prove a new proposition by applying the theorem T_, to all
_P_
propositions proved by nodes in _c._
_P_
The proof of the theorem a1i in Figure 1 (a) consists of 3 theorem applications. In plain language,
the theorem is a proof of the fact that if ph is true, then (ps->ph) is also true. The top-level
nodes are the hypotheses of the theorem. Most of the hypotheses state that some expression is a
well-formed formula so that the expression can be used to form a syntactically correct sentence.
The more interesting hypothesis is a1i.1, which states |-ph, meaning ph is assumed to be true.
In the bottom node, the theorem invokes the theorem ax-mp, which takes in four propositions as
hypotheses, and returns the conclusion |-(ps->ph).
4 METHOD
In this section, we describe our approach to training neural network models for extracting useful
theorems from proofs. Our approach inspects one proof at a time and this intuition comes from the
fact that human mathematicians do not need to look at multiple proofs and can instead determine
whether a proof segment is broadly applicable just from the current proof. As one can represent
mathematical proofs as trees, we first discuss how to identify a connected component of the tree
with a valid proof of another theorem. We then formalize the problem of theorem extraction as
a node-level binary classification problem on the proof tree. Next, we propose an algorithm that
expands a theorem’s proof inside of another proof, to create suitable targets for learning theorem
extraction. Finally, we give an algorithm that verifies if the component predicted by the model
constitutes a valid proof of a theorem, and if so, turns the component into a theorem.
4.1 SUB-COMPONENT OF A PROOF TREE AS A THEOREM
We have discussed how one can represent a mathematical proof as a proof tree in section 3. Interestingly, one can also identify some components of the proof tree with an embedded proof of
another theorem. To start with, given a node in a proof tree, one can treat the entire subtree above that
node as a proof of the node (more precisely, the proposition contained in the node, i.e., PROP). For
example, in the proof of a1i, the subtree above the node ax-1 consists of two hypotheses wffph
and wffps, and they constitute a proof of the proposition |-(ph->(ps->ph)) contained in the
node ax-1.
In addition to the entire subtree above a node, one may identify some connected component of the
tree with a valid theorem. For example, in Figure 1 (c), we show that the proof of the theorem mp1i
contains an embedded proof of the theorem a1i. The embedded proof is colored in blue, and there is
a one-to-one correspondence between these blue nodes and the nodes in the proof of a1i shown in
Figure 1 (a). One can hence refactor the proof with an invocation of the theorem a1i, resulting in a
much smaller tree shown in Figure 1 (b).
In general, there are certain criteria a component needs to satisfy to be identified as a valid proof of a
theorem. In Appendix A.2, we develop such an algorithm in more detail that performs the verification
for theorem extraction. We will use that to verify the prediction given by a neural network model.
To conclude, in this section, we establish the equivalence between theorem extraction from a proof as
to the extraction of a sub-component from a proof tree. This allows us to formalize the problem as a
node-level prediction problem on graphs as we introduce next.
4.2 SUPERVISED PREDICTION TASK
The model is given a proof tree G with a set of nodes V, edges E, and node features xv which
correspond to the name N and the proposition PROP associated with each node. The task of the model
-----
is to output a subset of nodes target that correspond to an embedded proof of a useful theorem.
We cast the problem as a node-level binary classification problem that predicts whether each node V _⊂V_
belongs to Vtarget. Without loss of generality, we let all nodes in Vtarget to have labels of 1 and the
rest 0.
We use a graph neural network parametrized by θ to take a single graph and its node feature as input,
and outputs a scalar _P[ˆ]v between 0 and 1 for each node v ∈V, representing the probability belonging_
to target. Our objective is a binary cross entropy loss between the node level probabilities and the
_V_
ground truth target for a graph. Because the number of nodes usually varies significantly across
proofs, we normalize the loss by the number of nodes in the graph[1]:
_L(G, θ) = −_ [1]
_|V|_
_−_ [1]
_|V|_
log P ( P[ˆ]v = 1|G, θ) (1)
_v∈VXtarget_
log P ( P[ˆ]v = 0|G, θ) (2)
_v /∈VXtarget_
We then seek the best parameters by minimizing the loss over all proof trees:
_L(G, θ)._ (3)
argmin
4.3 REFACTOR: THEOREM-FROM-PROOF EXTRACTOR
With the prediction task formulated, we now describe how to generate training data points of proof
trees G with suitable targets Vtarget defined. Even though we specialize our discussion in the context
of Metamath, the same technique can be applied to other formal systems for creating datasets of
theorem extraction, such as Lean (de Moura et al., 2015).
It is worth noting that even though the existing human proofs from the Metamath library cannot be
used directly, they offer us hints as to how to construct training data points. To illustrate, in Figure
1 (b), the proof of mp1i invokes a theorem application with a1i, which is a theorem that human
considered useful and stored in the library. Our idea is to reverse the process of theorem extraction,
by expanding the proof of a1i in the proof of mp1i to obtain a synthetic proof shown in 1 (c). In
this expanded proof of mp1i, one can see the proof of a1i is embedded as a component colored in
blue, hence creating a suitable target for theorem extraction.
We explain how we perform the proof expansion in detail. We think of the theorem as a function
whose arguments are a set of hypotheses and the output is a conclusion, as mentioned in 3. Instead of
calling the theorem by its name, we intentionally duplicate the body of its proof tree, and replace
their nominal arguments with the arguments we wish to pass in context. There are three key steps: 1.
identifying the proof tree associated to the theorem (e.g., a1i in Figure 1 (a)), substituting nominal
arguments with the ones in the proof context (e.g., substituting leaf nodes wffph, wffps and |-ph
in Figure 1 (a) with nodes wffps, wffch and |-ps in Figure 1 (b) respectively[2]), and finally copy
and replace it to where the expanded node is located (e.g, replace a1i node in Figure 1 (b) with the
substituted a1i to arrive at Figure 1 (c)). We present a more formal and detailed exposition of the
algorithm in Appendix A.1.
Lastly, note that there are many options for theorem expansion. Firstly, one single proof can contain
multiple theorems, and each theorem can be expanded either simultaneously or one by one. In
addition, one can even recursively expand theorems by expanding the theorem inside of an expanded
proof. For simplicity, in this work, we only expand one theorem at a time, and for every theorem
in a proof. Hence, for a proof that contains M total number of theorem applications, we create M
data points for learning theorem extraction. We leave investigations of more sophisticated expansion
schemes to future work.
1In our preliminary experiments we found that the normalized loss gave better performance than weighting
all nodes in the database equally.
2Note that these three nodes in Figure 1 (b) are parents, namely, arguments to a1i node in Figure 1 (b).
-----
10[3]
10[2]
10[1]
|Col1|Col2|Col3|Col4|Col5|m|ean|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
||||||m|edian|||
||||||||||
||||||||||
|Col1|Col2|Col3|mean|
|---|---|---|---|
||mean median||mean median|
|||||
|||||
|0 20 40 # Occ (a) e 2: Numbe b) show noti mum 10 occ ly extracts sh 1: Node lev : all the edge tion of the p pt all the edg ures: wheth K = 10 and|60 80 100 urrence r of theorems ceable occur urrence. (c) ort theorems el and proof s in the grap aths that go es are revers er or not the d = 256.||102 1 ew Theorem t (b). Both ( sampling of d. The mod reds of node gurations. N re in the sam eaves Ro → edges. Nod ments are ru|
|||Training Node Accuracy Training Proof Accuracy Test Node Accuracy T|est Proof Accurac|
|No edge + No Leaves→Root + Leaves←Root + Leaves↔ s↔Root + Node F|de Features Node Features Node Features Root eatures (REFACTO|86.8% 0.1% 74.9% 87.1% 0.5% 75.2% 96.6% 6.0% 88.1% 86.3% 0% 74.2% R) 97.5% 37.5% 84.3%|0.1% 0.1% 3.5% 0% 13.3%|
EXPERIMENTS
In this section, we evaluate the performance of our theorem extraction method via a variety of
experiments. We begin by describing our dataset and experimental setup and then analyze the results
to address the following research questions:
- Q1: How does REFACTOR perform when evaluating against ground truth theorem under a
variety of ablations of data and model architectures?
- Q2: Are newly extracted theorems by REFACTOR used frequently?
- Q3: With newly extracted theorems, can we (a) compress the existing theorem library and
(b) improve theorem proving?
5.1 DATASET AND PRE-PROCESSING
We applied REFACTOR to create datasets from the main and largest library of Metamath, set.mm.
In order to fairly compare prover performance reported from Whalen (2016), we used their version of
set.mm, which contains 27220 theorems. We also filtered out all expanded proofs with more than
1000 nodes or contain nodes features of character length longer than 512. This gave rise to 257264
data points for training theorem extraction before theorem maximum occurrence capping, which we
describe next.
We noted that the distribution of theorem usage in set.mm is highly imbalanced. To prevent the
model from learning to only extract a few numbers of common theorems due to their pervasiveness,
we employed a subsampling of the data with respect to theorem occurrence to balance the dataset.
Specifically, in the training set, for those theorems that occur more than 100 times as extraction
targets, we subsampled 100 data points per theorem. In Figure 2 (a), we plot a histogram of theorem
occurrence versus the number of theorems. As seen in the figure, the distribution roughly follows a
power-law distribution with 4000 theorems only used once in set.mm, and a substantial number of
theorems that occur beyond 100 times. For the validation and test set, as we wanted to evaluate the
model on a diverse set of extraction targets, we capped the maximum number of occurrences as 10
using subsampling. The occurrence histogram of the test dataset is shown in Figure 2 (b) and the total
number of expanded proofs in our dataset after capping theorem maximum occurrence is 124294.
-----
To evaluate the model’s generalization ability, we performed a target-wise split on the dataset. That
is, we split the dataset in a way that the prediction targets, namely, the theorems to be extracted,
are different for the train, valid and test set. By doing so, we discouraged simple memorization of
common theorems and extracting them from unseen proofs.
5.2 MODEL ARCHITECTURE AND TRAINING PROTOCOL
In this section, we describe our neural network architecture parameters and other training details. We
used a character-level tokenization for the node feature, which is a concatenation of texts in the fields
N and PROP (see Figure 1). For each node, we first embedded all the characters with an embedding
matrix, followed by two fully connected layers. We then averaged over all embeddings to obtain a
vector representation of a node. We used these vector representations as the initial node embeddings
to a graph neural network. We used K GraphSage convolution (Hamilton et al., 2017) layers with
size d and two more fully connected layers with sigmoid activation at the end to output the scalar
probability. The size of the character embedding was set to 128 and the number of hidden neurons in
all the fully connected layers was set to 64. Both K and d are hyperparameters.
For all of our model training, we used a learning rate of 1e-4 with Adam optimizer (Kingma & Ba,
2015). All methods were implemented in Pytorch[3] and Pytorch Geometric library [4]. We ran all
experiments on one NVIDIA Quadro RTX 6000, with 4-core CPUs.
5.3 Q1 - HOW MANY HUMAN-DEFINED THEOREMS DOES THE MODEL EXTRACT?
On the theorem extraction dataset obtained from Section 5.1, REFACTOR was able to correctly
classify 85.6% (Node Accuracy) of the nodes. For 19.6% (Proof Accuracy) of the proofs, REFACTOR
was able to correctly classify all of the nodes and fully recover the theorem that the human use. We
also show that our approach scales well with the model size (Table 2). As we increase the model by
around 50x from 80k to 4M, both node and proof accuracy improve. In particular, the proof accuracy
goes up significantly from 2.3% to 19.6%. This shows promise that the accuracy can be further
improved by using a larger model with a larger dataset.
To understand what mechanism in the GNN made the theorem extraction possible, we re-trained the
model, but with different configurations compared to the original training procedure. In particular, we
examined the case where all the edges are removed (No edge) as well as two types of uni-directional
connections: 1) only edges that go from leaves to root are included (Leaves→Root) and 2) only edges
that go from root to leaves are included (Leaves←Root). In addition, we were curious to see whether
the graph structure alone is sufficient for theorem prediction when no node features are provided.
For all the experiments, we used a model with K = 10 and d = 256. We summarize the results of
these data configurations in Table 1 and report node level and proof level accuracy on training and
test set. It can be seen that both edge connection and input node feature information is crucial in this
task as both (No edge + Node Features) and (Leaves↔Root) achieved minimum proof level accuracy.
Interestingly, the direction of edge led to a drastically different performance. Leaves→Root + Node
Features performs poorly in proof level accuracy whereas Leaves←Root + Node Features achieved
comparable performance with bidirectional edges (Leaves↔Root + Node Features).
This phenomenon can be explained by recognizing the fact that there are many identical hypothesis
nodes in a proof due to MetaMath’s low-level nature. For example, there are three identical leaf
nodes wps in Figure 1 (c). If the edges only point from hypothesis to conclusion, the message for
two identical hypothesis leaves will always be the same due to no incoming messages. Hence, it
is theoretically impossible to make correct predictions on the proof level. On the other hand, the
opposite direction of edges does not suffer from this limitation as there is only one root in the proof
tree. Empirically, this configuration is able to achieve decent performance, but still far behind the
performance of the model with bi-directional edges.
5.4 Q2 - ARE NEWLY EXTRACTED THEOREMS BY REFACTOR USED FREQUENTLY?
In this section, we investigate whether theorems extracted by REFACTOR are used frequently. We
used the best model (i.e., the largest model) in Table 2 for the results analyzed in this section. We
3https://pytorch.org/
4https://pytorch-geometric.readthedocs.io/en/latest/
-----
Table 2: Node level and proof level accuracy of REFACTOR with various model sizes.
_K, d, Number of Trainable Parameters_ Training Node Accuracy Training Proof Accuracy Test Node Accuracy Test Proof Accuracy
5, 64, 80k 89.4% 5.1% 77.4% 2.3%
5, 128, 222k 91.3% 9.9% 78.6% 3.0%
5, 256, 731k 93.7% 17.3% 80.1% 4.4%
10, 256, 1206k 97.5% 37.5% 84.3% 13.3%
10, 512, 4535k 97.9% 42.7% 85.6% 19.6%
Table 3: An analysis of incorrect predictions on the
theorem extraction dataset.
Dataset Total Not Tree & Invalid Tree & Invalid Tree & Valid
Training 64349 13368 47521 3460
Validation 4766 1175 3238 353
Test 4822 1206 3348 328
set.mm 22017 8182 13470 365
Table 4: Proof success rate comparison. New
theorem usage for REFACTOR is averaged
across 1 and 5 min setting.
Setting 1 min 5 min New Theorem Usage
Holophrasm (Whalen, 2016) - 14.3% -
Holophrasm (ours) 11.5% 15.1% -
REFACTOR **14.9%** **17.2%** 43.0%
explored two ways of extracting new theorems. We first investigated the incorrect predictions of
REFACTOR on the theorem extraction dataset. When the prediction differs from the ground truth, it
can correspond to a valid proof. We also applied REFACTOR on the human proofs of nodes less than
5000 from the library set.mm. In both cases, we first need to verify the validity of the extracted
components using the algorithm developed in details in Appendix A.2.
The number of valid theorems from the incorrect predictions on the theorem extraction dataset, and
the predictions on set.mm are listed under Tree & Valid in Table 3. We observe that there were
a non-trivial amount of predictions that led to valid theorems. Remarkably, we see REFACTOR
was able to extract valid theorems in the real human proofs (set.mm), despite the fact that human
proof distribution may be very different from the training distribution. Adding up all extracted
theorems from both approaches, we arrived at 4204 new theorems. We notice that among them,
some new theorems were duplicates of each other due to standardization and we kept one copy of
each by removing all other duplicates. We also removed 302 theorems extracted on set.mm that
corresponded to the entire proof tree. In the end, we were left with 1923 unique new theorems with
1907 and 16 from the expanded and original dataset respectively. We showed examples of extracted
new theorems in the Appendix B.1. We also plot the distribution of number of proof nodes of the
extracted theorems in Figure 2 (c). We can see the newly extracted theorems are of various sizes,
spanning almost two orders of magnitudes.
We then computed the number of usages in set.mm for each newly extracted theorem, reported in
Table 5. The average number of uses is 83 times, showing nontrivial utility of these theorems. Notably,
the theorems extracted on set.mm are even more frequently used – 733.5 times on average. We
think that because the human library is already quite optimized, it is harder to extract new theorems
from existing proofs. But a successful extraction is likely to be of higher quality as the proof tree
input represents a true human proof rather than a synthetically expanded proof.
We additionally performed a more detailed analysis on the predictions, by classifying them into
three categories. The first category is denoted by Non-Tree & Invalid where the prediction is a
disconnected set of nodes and hence it is impossible to form a new theorem. In the second category
_Tree & Invalid, the prediction is a connected component and hence forming a sub-tree, but it still does_
not satisfy other conditions outlined in our algorithm description to be a valid proof of a theorem.
The last category Tree & Valid corresponds to a prediction that leads to an extraction of new theorem
previously not defined by humans. We present the number of predictions for each category in Table 3.
Surprisingly, we noticed the model predicted a substantial amount of disconnected components. We
hypothesize this may be because our current model makes independent node-level predictions. We
believe an autoregressive model has a great potential to fix this problem by encouraging contiguity, a
direction which we leave for future work.
5.5 Q3A - HOW MUCH CAN WE COMPRESS THE EXISTING LIBRARY USING THE EXTRACTED
THEOREMS?
When the newly extracted theorems are broadly reusable, we would expect the proofs in the library
could be shortened by using the new theorems as part of the proofs. In this paper, we consider a
specific re-writing procedure, which alternates between 1) matching the extracted theorems against the
proofs in the library and 2) replacing the matched proportion of the proofs with the application of the
new theorems (See more details in the Appendix). We call this procedure the refactoring procedure
and the resulting shortened proof the refactored proof. We want to highlight that compression is
only one of the downstream tasks we used to evaluate the usefulness of our extracted theorems. One
-----
Table 5: Theorem usage and their contribution to refactoring
# Theorems Used Total Usage Average Usage Max Usage Average Number of Nodes Saved Total Number of Nodes Saved
Expanded 670 147640 77.4 60705 196.7 375126
Original 14 11736 733.5 8594 2025.8 32413
Total 684 159376 82.9 60705 211.9 407539
may pursue a compression objective for this purpose, to find the most frequently appeared fragments
across all proofs. Our single-proof prediction approach puts its main focus on human preferences and
could potentially be combined with compression as future work.
With the 16 new extracted theorems from the original dataset, the new library obtained from refactoring was indeed smaller (See Table 5). These new theorems on average saved 2025.8 nodes which
is an order of magnitude more than those from the expanded dataset (196.7 nodes). Nevertheless,
this shows that extracted theorems from both expanded and human datasets are frequently used in
refactoring the theorem library. In total, we were able to refactor 14092 out of 27220 theorems in the
MetaMath database. This improvement in compression is striking, as REFACTOR didn’t explicitly
consider compression as an objective.
5.6 Q3B - ARE NEWLY EXTRACTED THEOREMS USEFUL FOR THEOREM PROVING?
We further demonstrated the usefulness of our new theorems with an off-the-shelf neural network
theorem prover, Holophrasm (Whalen, 2016). We trained two Holophrasm provers, one with the
original dataset, and the other with the dataset augmented with the newly extracted and refactored
proofs.
We evaluated the proof success rate in Table 4. We used the default values for all hyperparameters of
the prover, and we evaluated proof success rates on a hold-out suit of test theorems. We report the
results with the time limit of each proof search set to 1 and 5 minutes. Compared to the reported result
in Whalen (2016) under a 5-minute limit, our re-implementation was able to obtain a slightly higher
success rate (15.1%). It can be seen that by training on the refactored dataset, the prover’s proof
success rate improved relatively by 14-30% under 1 and 5 min limits, demonstrating the usefulness
of REFACTOR in theorem proving.
To investigate how newly extracted theorems contributed to the improvement, we calculated the
percentage of proved theorem that used new theorem at least once in its proof, i.e. new theorem usage
as shown in Table 4. The usage for 1 and 5 min cases are 42.3% and 43.6% respectively, indicating
newly extracted theorems were used very frequently by the prover. More remarkably, the newly
extracted theorems used in proving test theorems did not concentrate on few theorems as one might
predict. Instead, there was a diverse set of newly extracted theorems that were useful in theorem
proving: for the 5 min setting, there were in total 141 unique new theorems used for proving test
theorems, and the most frequently used one was used 17 times (see more details in Appendix B.2).
6 CONCLUSION
In this paper, we study the problem of extracting useful theorems from mathematical proofs in the
Metamath framework. As proofs are represented as proof trees in formal systems, we formalize
theorem extraction as a node-level binary classification problem on proof trees. We propose one way
to create datasets for the problem and additionally develop an algorithm to verify the validity of the
prediction. We demonstrate that our best graph neural network model was able to extract unseen
human theorems 19.6% of the time. When the model’s prediction did not match the human theorem
ground truth, we can additionally extract 1907 theorems from the dataset. We further applied the
model on the existing Metamath library and found it was able to extract 16 new theorems, each was
used 733.5 times on average in the entire Metamath database. After theorem refactoring, those 16
new theorems saved 32413 proof nodes of the entire dataset. Finally, by training the refactored proofs,
we show a prover achieved better proof success rate on test theorems.
Our work represents the first proof-of-concept of theorem extraction using neural network models.
We see there are various ways to improve the existing model, such as scaling up the model size, or
using more powerful architectures such as transformers to autoregressively predict the target, all
of which are left to future works. Lastly, we would like to note that our methodology is not only
generic for formal mathematical theorem extraction, but also has the potential to be applied to other
applications, such as code refactoring.
-----
7 ETHICS STATEMENT
We do not foresee any negative ethical and societal impacts for our project.
8 REPRODUCIBILITY STATEMENT
Full data and code for all experiments will be released with the final version of this draft. We have
provided code for theorem expansion and theorem verification algorithms along with a subset of our
data in the supplementary materials.
-----
REFERENCES
Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. 2016
_IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 39–48, 2015._
Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. Holist: An
environment for machine learning of higher order logic theorem proving. In Kamalika Chaudhuri
and Ruslan Salakhutdinov (eds.), Proceedings of the 36th International Conference on Machine
_Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings_
_[of Machine Learning Research, pp. 454–463. PMLR, 2019a. URL http://proceedings.](http://proceedings.mlr.press/v97/bansal19a.html)_
[mlr.press/v97/bansal19a.html.](http://proceedings.mlr.press/v97/bansal19a.html)
Kshitij Bansal, Christian Szegedy, Markus N. Rabe, Sarah M. Loos, and Viktor Toman. Learning to
Reason in Large Theories without Imitation. arXiv preprint arXiv:1905.10501, 2019b.
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicael Courant, Yann Coscoy, David Delahaye,¨
Daniel de Rauglaudre, Jean-Christophe Filliatre, Eduardo Gimˆ enez, Hugo Herbelin, et al. The´
Coq proof assistant reference manual. INRIA, version, 6(11), 1999.
Michael Chang, Abhishek Gupta, Sergey Levine, and Thomas L. Griffiths. Automatically composing
representation transformations as a means for generalization. In 7th International Conference on
_Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net,_
[2019. URL https://openreview.net/forum?id=B1ffQnRcKX.](https://openreview.net/forum?id=B1ffQnRcKX)
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
Lean theorem prover (system description). In International Conference on Automated Deduction,
pp. 378–388. Springer, 2015.
Eyal Dechter, Jonathan Malmaud, Ryan P. Adams, and Joshua B. Tenenbaum. Bootstrap learning via modular concept discovery. In Francesca Rossi (ed.), IJCAI 2013, Proceedings of the
_23rd International Joint Conference on Artificial Intelligence, Beijing, China, August 3-9, 2013,_
pp. 1302–1309. IJCAI/AAAI, 2013. [URL http://www.aaai.org/ocs/index.php/](http://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6890)
[IJCAI/IJCAI13/paper/view/6890.](http://www.aaai.org/ocs/index.php/IJCAI/IJCAI13/paper/view/6890)
Kevin Ellis, Lucas Morales, Mathias Sable-Meyer, Armando Solar-Lezama, and Josh Tenenbaum.´
Learning libraries of subroutines for neurally-guided bayesian program induction. In Samy Bengio,
Hanna M. Wallach, Hugo Larochelle, Kristen Grauman, Nicolo Cesa-Bianchi, and Roman Garnett`
(eds.), Advances in Neural Information Processing Systems 31: Annual Conference on Neural
_Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montreal, Canada´_, pp.
[7816–7826, 2018. URL https://proceedings.neurips.cc/paper/2018/hash/](https://proceedings.neurips.cc/paper/2018/hash/7aa685b3b1dc1d6780bf36f7340078c9-Abstract.html)
[7aa685b3b1dc1d6780bf36f7340078c9-Abstract.html.](https://proceedings.neurips.cc/paper/2018/hash/7aa685b3b1dc1d6780bf36f7340078c9-Abstract.html)
Kevin Ellis, Catherine Wong, Maxwell I. Nye, Mathias Sable-Meyer, Luc Cary, Lucas Morales,´
Luke B. Hewitt, Armando Solar-Lezama, and Joshua B. Tenenbaum. Dreamcoder: Growing
generalizable, interpretable knowledge with wake-sleep bayesian program learning. _CoRR,_
[abs/2006.08381, 2020. URL https://arxiv.org/abs/2006.08381.](https://arxiv.org/abs/2006.08381)
Alexander L. Gaunt, Marc Brockschmidt, Nate Kushman, and Daniel Tarlow. Differentiable programs
with neural libraries. In ICML, 2017.
Thibault Gauthier and Cezary Kaliszyk. Sharing HOL4 and HOL light proof knowledge. In Martin
Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov (eds.), Logic for Programming,
_Artificial Intelligence, and Reasoning - 20th International Conference, LPAR-20 2015, Suva, Fiji,_
_November 24-28, 2015, Proceedings, volume 9450 of Lecture Notes in Computer Science, pp._
[372–386. Springer, 2015. doi: 10.1007/978-3-662-48899-7\ 26. URL https://doi.org/](https://doi.org/10.1007/978-3-662-48899-7_26)
[10.1007/978-3-662-48899-7_26.](https://doi.org/10.1007/978-3-662-48899-7_26)
Thibault Gauthier, Cezary Kaliszyk, and Josef Urban. Initial experiments with statistical conjecturing
over large formal corpora. In Andrea Kohlhase, Paul Libbrecht, Bruce R. Miller, Adam Naumowicz,
Walther Neuper, Pedro Quaresma, Frank Wm. Tompa, and Martin Suda (eds.), Joint Proceedings
_of the FM4M, MathUI, and ThEdu Workshops, Doctoral Program, and Work in Progress at the_
_Conference on Intelligent Computer Mathematics 2016 co-located with the 9th Conference on_
-----
_Intelligent Computer Mathematics (CICM 2016), Bialystok, Poland, July 25-29, 2016, volume 1785_
[of CEUR Workshop Proceedings, pp. 219–228. CEUR-WS.org, 2016. URL http://ceur-ws.](http://ceur-ws.org/Vol-1785/W23.pdf)
[org/Vol-1785/W23.pdf.](http://ceur-ws.org/Vol-1785/W23.pdf)
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and Michael Norrish. Learning
[to prove with tactics. CoRR, abs/1804.00596, 2018. URL http://arxiv.org/abs/1804.](http://arxiv.org/abs/1804.00596)
[00596.](http://arxiv.org/abs/1804.00596)
Christopher Hahn, Frederik Schmitt, Jens U. Kreber, Markus N. Rabe, and Bernd Finkbeiner.
Transformers Generalize to the Semantics of Logics. arXiv preprint arXiv:2003.04218, 2020.
William L Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs.
_arXiv preprint arXiv:1706.02216, 2017._
John Harrison. HOL Light: A tutorial introduction. In International Conference on Formal Methods
_in Computer-Aided Design, pp. 265–269. Springer, 1996._
Stefan Hetzl, Alexander Leitsch, and Daniel Weller. Towards algorithmic cut-introduction. In
_International Conference on Logic for Programming Artificial Intelligence and Reasoning, pp._
228–242. Springer, 2012.
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. GamePad: A learning environment
for theorem proving. In 7th International Conference on Learning Representations, ICLR 2019,
_[New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019. URL https://openreview.](https://openreview.net/forum?id=r1xwKoR9Y7)_
[net/forum?id=r1xwKoR9Y7.](https://openreview.net/forum?id=r1xwKoR9Y7)
Drew A Hudson and Christopher D Manning. Compositional attention networks for machine
reasoning. In International Conference on Learning Representations (ICLR), 2018.
Cezary Kaliszyk and Josef Urban. Learning-assisted theorem proving with millions of lemmas. J.
_[Symb. Comput., 69:109–128, 2015. doi: 10.1016/j.jsc.2014.09.032. URL https://doi.org/](https://doi.org/10.1016/j.jsc.2014.09.032)_
[10.1016/j.jsc.2014.09.032.](https://doi.org/10.1016/j.jsc.2014.09.032)
Cezary Kaliszyk, Josef Urban, and Jir´ı Vyskocil. Lemmatization for stronger reasoning in large
theories. In Carsten Lutz and Silvio Ranise (eds.), Frontiers of Combining Systems - 10th In_ternational Symposium, FroCoS 2015, Wroclaw, Poland, September 21-24, 2015. Proceedings,_
volume 9322 of Lecture Notes in Computer Science, pp. 341–356. Springer, 2015. doi: 10.1007/
[978-3-319-24246-0\ 21. URL https://doi.org/10.1007/978-3-319-24246-0_](https://doi.org/10.1007/978-3-319-24246-0_21)
[21.](https://doi.org/10.1007/978-3-319-24246-0_21)
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua
Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR
_[2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http:](http://arxiv.org/abs/1412.6980)_
[//arxiv.org/abs/1412.6980.](http://arxiv.org/abs/1412.6980)
Guillaume Lample and Franc¸ois Charton. Deep learning for symbolic mathematics. In 8th
_International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia,_
_[April 26-30, 2020. OpenReview.net, 2020. URL https://openreview.net/forum?id=](https://openreview.net/forum?id=Ske31kBtPr)_
[Ske31kBtPr.](https://openreview.net/forum?id=Ske31kBtPr)
Wenda Li, Lei Yu, Yuhuai Wu, and Lawrence C. Paulson. Isarstep: a benchmark for high-level
mathematical reasoning. In International Conference on Learning Representations, 2021. URL
[https://openreview.net/forum?id=Pzj6fzU6wkj.](https://openreview.net/forum?id=Pzj6fzU6wkj)
Jiayuan Mao, Chuang Gan, Pushmeet Kohli, Joshua B. Tenenbaum, and Jiajun Wu. The neurosymbolic concept learner: Interpreting scenes, words, and sentences from natural supervision. In
_[International Conference on Learning Representations, 2019. URL https://openreview.](https://openreview.net/forum?id=rJgMlhRctm)_
[net/forum?id=rJgMlhRctm.](https://openreview.net/forum?id=rJgMlhRctm)
Bartosz Piotrowski and Josef Urban. Guiding Inferences in Connection Tableau by Recurrent Neural
Networks. In Christoph Benzmuller and Bruce Miller (eds.),¨ _Intelligent Computer Mathematics,_
pp. 309–314, Cham, 2020. Springer International Publishing. ISBN 978-3-030-53518-6.
-----
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_[CoRR, abs/2009.03393, 2020. URL https://arxiv.org/abs/2009.03393.](https://arxiv.org/abs/2009.03393)_
Markus N Rabe, Dennis Lee, Kshitij Bansal, and Christian Szegedy. Mathematical reasoning via
self-supervised skip-tree training. arXiv preprint arXiv:2006.04757, 2020.
Josef Urban and Jan Jakubuv. First Neural Conjecturing Datasets and Experiments. In Christoph˚
Benzmuller and Bruce Miller (eds.),¨ _Intelligent Computer Mathematics, pp. 315–323, Cham, 2020._
Springer International Publishing. ISBN 978-3-030-53518-6.
Jiˇr´ı Vyskocil, David Stanovskˇ y, and Josef Urban. Automated proof compression by invention of`
new definitions. In International Conference on Logic for Programming Artificial Intelligence and
_Reasoning, pp. 447–462. Springer, 2010._
Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine translation in autoformalization of mathematics in mizar. Proceedings of ACM SIGPLAN
_International Conference on Certified Programs and Proofs, 2020._
Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic, 2016.
Yuhuai Wu, Honghua Dong, Roger B. Grosse, and Jimmy Ba. The scattering compositional learner:
Discovering objects, attributes, relationships in analogical reasoning. CoRR, abs/2007.04212, 2020.
[URL https://arxiv.org/abs/2007.04212.](https://arxiv.org/abs/2007.04212)
Yuhuai Wu, Albert Jiang, Jimmy Ba, and Roger Grosse. INT: An Inequality Benchmark for Evaluating
Generalization in Theorem Proving. In International Conference on Learning Representations,
[2021. URL https://openreview.net/forum?id=O6LPudowNQm.](https://openreview.net/forum?id=O6LPudowNQm)
Kaiyu Yang and Jia Deng. Learning to Prove Theorems via Interacting with Proof Assistants. In
_Proceedings of International Conference on Machine Learning (ICML), 2019._
-----
A FURTHER EXPLANATIONS OF THE ALGORITHMS
A.1 THEOREM EXPANSION
We discuss our theorem expansion algorithm in this section. An overview of the algorithm can be
found in Algorithm 1. The algorithm takes input of two proof trees where the first proof tree uses the
theorem that the second proof tree shows as one of the steps.
We explain our algorithm with the example from Figure 1. Specifically, proof tree T corresponds to
Figure 1 (b) and proof tree Ts corresponds to Figure 1 (a). The theorem we want to expand is a1i and
we first obtain all its arguments using GetArguments function. We treat each theorem as a function
and its arguments are the hypothesis of the theorem used to compute the conclusion. Consequently,
the nominal arguments are wph, wps and a1i.1. Next, we obtain contextual arguments, which are
those specific hypotheses used in the context of the proof. Each hypothesis are represented by the
entire subtree above each parent of c. Concretely, the contextual arguments of the a1i node in (b)
are wps, wch and [wph, wps, mp1i.a, mp1i.b, ax-mp]. Here, we use square bracket to enclose
a subtree that has more than one node, which is treated holistically as the third contextual argument.
Note that we can clearly see a one-to-one correspondence between the nominal arguments and
the contextual arguments: (wph→wps, wps→wch and a1i.1→[wph, wps, mp1i.a, mp1i.b,
ax-mp]). We then simply replace all nodes in the proof tree of a1i using this mapping. This gives
us [wps, wch, wps, wi, wph, wps, mp1i.a, mp1i.b, ax-mp, wps, wch, ax-1, ax-mp]. We
generate its proof tree representation with GetProof function. Finally we replace the subtree above
a1i with the new proof tree which in this case happens to be the entire proof of mp1i and this leads
to the final expanded proof in Figure 1 (c).
**Algorithm 1 Theorem Expansion Algorithm Pseudocode**
1: procedure EXPANSION
2: **Input: proof tree T that uses theorem s at node c.**
3: **Input: proof tree of theorem s: Ts.**
4: nominalArguments = GetArguments(Ts)
5: contextualArguments = [GetSubtree(p) for p in GetParents(c)]
6: allNodeNames = GetAllNodeNames(Ts)
7: _f : nominalArguments →_ contextualArguments.
8: _f_ (i[th] element of nominalArguments) ≜ _i[th]_ element of contextualArguments
9: **for each name N ∈** allNodeNames do
10: **if N ∈** nominalArguments then
11: replace N with f (N )
12: replacedProof = GetProof(allNodeNames)
13: replace entire subtree above node c with replacedProof
14: **return T**
A.2 THEOREM VERIFICATION
N: wph N: wps N: wch N: wph N: wps N: wch
PROP: wffph PROP: wffps PROP: wffch PROP: wffph PROP: wffps PROP: wffch
N: wph N: wps N: wch N: w3a N: wth N: pm3.2an3 N: 3exp.1
PROP: wffph PROP: wffps PROP: wffch PROP: wff(ph/ps/ch) PROP: wffth PROP: |-(ph->(ps->(ch->(ph/ps/ch)))) PROP: |-((ph/ps/ch)->th)
N: wph N: wps N: wch N: wth N: syl8
PROP: wffph PROP: wffps PROP: wffch PROP: wffth PROP: |-(ph->(ps->(ch->th)))
N: imp31
PROP: |-(((ph/ps)/ch)->th)
Figure 3: A proof tree prediction where nodes with output probability greater than 0.5 have been
colored blue. This proof tree does not satisfy the constraint to be a valid theorem because only one of
the parent nodes of the root are predicted to be in _target._
_V_
In this section, we present our algorithm to determine whether a predicted component made by
REFACTOR constitutes a valid theorem. On a high level, our algorithm checks two necessary
-----
N: wps N: wph
PROP: wffps PROP: wffph
N: wph N: wa N: wph N: wps N: bianfi.1
PROP: wffph PROP: wff(ps/ph) PROP: wffph PROP: wffps PROP: |--.ph
N: wps N: wph N: wn N: wn N: bianfi.1 N: intnan
PROP: wffps PROP: wffph PROP: wff-.ph PROP: wff-.(ps/ph) PROP: |--.ph PROP: |--.(ps/ph)
N: wph N: wa N: 2th
PROP: wffph PROP: wff(ps/ph) PROP: |-(-.ph<->-.(ps/ph))
N: con4bii
PROP: |-(ph<->(ps/ph))
N: wph N: wa
PROP: wffph PROP: wff(ps/ph)
N: wn N: wn N: bianfi.1 N: intnan
PROP: wff-.ph PROP: wff-.(ps/ph) PROP: |--.ph PROP: |--.(ps/ph)
N: wph N: wa N: 2th
PROP: wffph PROP: wff(ps/ph) PROP: |-(-.ph<->-.(ps/ph))
N: con4bii
PROP: |-(ph<->(ps/ph))
(a) A prediction made by REFACTOR with _V[ˆ]target in_
blue.
N: wph N: wps
PROP: wffph PROP: wffps
N: wn N: wn N: hyp.1 N: hyp.2
PROP: wff-.ph PROP: wff-.(ps/ph) PROP: |--.ph PROP: |--.(ps/ph)
N: wph N: wps N: 2th
PROP: wffph PROP: wffps PROP: |-(-.ph<->-.(ps/ph))
N: con4bii
PROP: |-(ph<->(ps/ph))
(c) _V[ˆ]target extracted as in (b) with leaf node name and_
proposition replaced.
(b) _V[ˆ]target extracted from (a)._
N: wph N: wps
PROP: wffph PROP: wffps
N: wn N: wn N: hyp.1 N: hyp.2
PROP: wff-.ph PROP: wff-.ps PROP: |--.ph PROP: |--.ps
N: wph N: wps N: 2th
PROP: wffph PROP: wffps PROP: |-(-.ph<->-.ps)
N: con4bii
PROP: |-(ph<->ps)
(d) A valid proof tree extracted and verified.
Figure 4: Visualization of theorem verification algorithm.
conditions and performs standardization before feeding the node names of extracted component to a
verifier which we describe next.
We describe how we can verify Metamath proofs represented by a conclusion and a list of node names
such as the ones seen in the previous section. This can be easily achieved by calling GetProof
from Algorithm 1 on the list of nodes names which follow a Reverse Polish Notation (RPN), and
the function call returns a proof tree labelled with propositions (i.e., PROP) . We then compare
between the proposition given in the bottom node (conclusion) to the given conclusion specified by
the theorem. The proof is verified if and only if the two conclusions are the same. We refer to this
simple procedure as Metamath verifier.
For the theorem verification algorithm, We first take all node prediction with value greater than 0.5 as
the set of extraction nodes, which we represent as _V[ˆ]target (see Figure 4 (a) and (b)). We first check if_
ˆ
_Vtarget forms a connected component i.e. a tree structure, as disjoint set of nodes cannot be a valid_
new theorem. Secondly, one necessary constraint for a valid extracted theorem is that for each node
in [ˆ]target, either none or all of its parent nodes need to be present in [ˆ]target. If only some but not
_V_ _V_
all parents are present, this corresponds to a step of theorem application with an incorrect number
of arguments. We illustrate one example that violates this constraint in Figure 3. As seen in this
example, only one parent of the root node is in _V[ˆ]target and similarly one parent node of syl8 is not_
in [ˆ]target. Because of these missing arguments, this will not be a valid new theorem. We note that
_V_
although the extraction algorithm can be implemented in a way such that it ”auto-completes” the
arguments by adding additional necessary nodes into the set of extracted nodes, we choose not to do
so in order to make sure the submodule is entirely identified by REFACTOR.
Once the extracted nodes pass these checks, we perform a so-called standardization. Here we once
again leverage functions defined in Algorithm 1. Specifically, we replace all node names of leaf nodes
with a pre-defined set of node names allowed in Metamath such as wph, wps. This can be achieved
by first obtaining arguments of the extracted component via GetArguments and replacing these
arguments in a fashion similar to Algorithm 1 except this time the nominal arguments are from the
extracted component and contextual arguments will be the pre-defined arguments from Metamath
convention. As seen in Figure 4 (c), we replace all leaf node names wa with wps.
After standardization, we simply feed all the node names of the extracted component into the verifier
we have described to determine whether it is a valid theorem. For example, node names in (c) [wph,
wps, wph, wn, wps, wn, hyp.1, hyp.2, 2th, con4bii] are fed into the verifier and we arrive at
Figure 4 (d).
-----
N: wps N: wch N: wch N: wth N: wps N: imim12i.2
PROP: wffps PROP: wffch PROP: wffch PROP: wffth PROP: wffps PROP: |-(ch->th)
N: wps N: wch N: wph N: wps N: wi N: wth N: imim12i.1 N: imim2i
PROP: wffps PROP: wffch PROP: wffph PROP: wffps PROP: wff(ps->ch) PROP: wffth PROP: |-(ph->ps) PROP: |-((ps->ch)->(ps->th))
N: wph N: wi N: wth N: syl5com
PROP: wffph PROP: wff(ps->ch) PROP: wffth PROP: |-(ph->((ps->ch)->th))
N: com12
PROP: |-((ps->ch)->(ph->th))
Figure 5: An example prediction that fails to be extracted as a new theorem due to no valid substitution
plan in standardization. Specifically, the blue node wi cannot be substituted to a basic argument
allowed in Metamath while still keeping the proof tree valid.
Intuitively, this standardization process can be thought of as an reverse process of the steps performed
in proof expansion algorithm. Instead of replacing simple and basic nominal arguments with complex
contextual ones, we use pre-defined simple contextual arguments from Metamath to replace the
complex nodes in the extracted proof tree. We note that verifying a proof after standardization is
not always possible. Consider an example in Figure 5 where the two parent nodes of blue node wi
are not included in _V[ˆ]target but in fact included in Vtarget. Because of this, we need to replace wi_
with a basic argument in Metamath such as wta. However, with this replacement, the arguments of
syl5com will no longer be valid because it needs an expression with two wff variables in the node
we substituted. Therefore, there will be no valid substitution and this proof tree prediction cannot
be extracted as a new theorem. We discard the extracted components that cannot be verified after
standardization and only consider the ones that can be verified as new theorems.
A.3 THEOREM REFACTORING
In this section, we describe how we use newly extracted theorems to refactor the proof database of
Metamath. Before proceeding, we first introduce how a basic refactor subroutine works. Consider
the proof of imim2i in Figure 6 (a) and a new theorem extracted by REFACTOR in (b). The
blue nodes in (a) can be refactored by the new theorem in (b) because their steps (wi and a1i) are
the same. We can substitute arguments in (b) (wffph, wffps, wffch, and |-(ph→ps)) with
arguments of blue nodes in (a) (wffph, wffps, wffch and |-(ph→ps)) respectively. After
performing the substitution, we can replace all blue nodes in (a) with a single theorem application
step of new theorem along with its arguments. The refactored proof tree of imim2i is shown in
Figure 6 (c).
We provide an overview of the refactoring algorithm in Algorithm 2. The algorithm aims to repeatedly
perform the aforementioned refactor subroutine on each node of proof trees of theorems with each
new extracted theorem until no further subroutine can be performed. Our implementation refactors
each proof tree in a post order traversal i.e. the leaves are attempted to be refactored first than the
root and this traversal is repeated when a refactor subroutine has been performed by using a while
loop. This is because once a theorem has been refactored, new theorems that are previously unable to
refactor it might be applicable again. Different traversal order and which new theorem to refactor
with first can potentially lead to different refactoring results and we leave this as future work.
-----
**Algorithm 2 Refactoring Algorithm Pseudocode**
1: procedure REFACTORING
2: Proof trees {p[(1)], p[(2)], · · ·, p[(][m][)]} of theorems (to be refactored) in set.mm.
3: Proof trees {q[(1)], q[(2)], · · ·, q[(][n][)]} of extracted new theorems.
4: Generate post order node traversal {TR[(1)], TR[(2)], · · ·, TR[(][m][)]} for each proof tree in
_{p[(1)], p[(2)], · · ·, p[(][m][)]}._
5: **for i ∈** 1, 2, · · ·, m do
6: **for j ∈** 1, 2, · · ·, n do
7: **while True do**
8: **for all node ∈** _TR[i]_ **do**
9: match = RefactorSubroutine(node, q[j])
10: **if match == True then**
11: Update post order node traversal TR[i]
12: **goto Line 7**
13: **break**
**return refactored proof trees** _p[(1)]ref_ _[, p][(2)]ref_ _[,][ · · ·][, p][(]ref[m][)]_
_{_ _[}]_
14: procedure REFACTORSUBROUTINE
15: **Input: proof tree with root node p that is matched against q.**
16: **Input: proof tree q, an extracted new theorem.**
17: pSteps = GetSteps(p)
18: qSteps = GetSteps(q)
19: **if pSteps != qSteps then**
20: **return False**
21: **else**
22: pArguments = GetArguments(p)
23: qArguments = GetArguments(q)
24: _f : qArguments →_ pArguments.
25: _f_ (i[th] element of qArguments) ≜ _i[th]_ element of pArguments
26: refactoredTheorem = GetTheoremApplication(q)
27: replace node p with refactoredTheorem
28: **return True**
-----
N: wph N: wps
PROP: wffph PROP: wffps
N: wi N: wch N: new_theorem.1
PROP: wff(ph->ps) PROP: wffch PROP: |-(ph->ps)
N: a1i
PROP: |-(ch->(ph->ps))
(b) A new theorem extracted by REFACTOR
N: wph N: wps
PROP: wffph PROP: wffps
N: wi N: wch N: imim2i.1
PROP: wff(ph->ps) PROP: wffch PROP: |-(ph->ps)
N: wch N: wph N: wps N: a1i
PROP: wffch PROP: wffph PROP: wffps PROP: |-(ch->(ph->ps))
N: a2i
PROP: |-((ch->ph)->(ch->ps))
(a) Proof tree of theorem imim2i from set.mm. The blue nodesand can be used to refactor blue nodes in (a).
can be refactored by the new theorem in (b).
N: wph N: wps N: wch N: imim2i.1
PROP: wffph PROP: wffps PROP: wffch PROP: |-(ph->ps)
N: wch N: wph N: wps N: new_theorem
PROP: wffch PROP: wffph PROP: wffps PROP: |-(ch->(ph->ps))
N: a2i
PROP: |-((ch->ph)->(ch->ps))
(c) Refactored proof tree of imim2i with new theorem highlighted in blue.
Figure 6: Visualization of a single refactoring operation. The theorem imim2i to be refactored is
shown in (a), the new theorem used for refactoring is shown in (b) and imim2i after refactoring is
shown in (c).
-----
B EXTRACTED THEOREMS
B.1 FREQUENTLY USED THEOREMS IN REFACTORING
In Figure 7, we show the top 10 most frequently used new theorems in refactoring. Among them,
two are extracted from the original set.mm and the rest are extracted from the expanded dataset.
It is worth noting that although these theorems generally have fewer than 10 nodes each, they in
total contribute to more than 78% of total number of nodes saved in refactoring, suggesting the
pervasiveness and reusability of these extracted theorems in set.mm.
N: cA N: cB
PROP: classA PROP: classB
N: wcel N: wph N: hyp.1
PROP: wffAe.B PROP: wffph PROP: |-Ae.B
N: a1i
PROP: |-(ph->Ae.B)
N: cA N: cB
PROP: classA PROP: classB
N: wph N: wceq
PROP: wffph PROP: wffA=B
N: wa
PROP: wff(ph/A=B)
N: cA N: cB
PROP: classA PROP: classB
N: wph N: wps N: wceq N: hyp.1 N: hyp.2
PROP: wffph PROP: wffps PROP: wffA=B PROP: |-(ph->ps) PROP: |-(ps->A=B)
N: syl
PROP: |-(ph->A=B)
(a) Used 60705 times, from (b) Used 11375 times, from exexpanded dataset panded dataset
(c) Used 11125 times, from expanded dataset
N: wph
PROP: wffph
N: wn N: wps
PROP: wff-.ph PROP: wffps
N: wa
PROP: wff(-.ph/ps)
N: wch N: wth
PROP: wffch PROP: wffth
N: wph N: wps N: wa N: hyp.1
PROP: wffph PROP: wffps PROP: wff(ch/th) PROP: |-(ph->ps)
N: adantr
PROP: |-((ph/(ch/th))->ps)
(d) Used 8594 times, from
set.mm
(e) Used 7241 times, from expanded dataset
N: wch N: wth
PROP: wffch PROP: wffth
N: wph N: wps N: wb N: hyp.1 N: hyp.2
PROP: wffph PROP: wffps PROP: wff(ch<->th) PROP: |-(ph->ps) PROP: |-(ps->(ch<->th))
N: syl
PROP: |-(ph->(ch<->th))
(f) Used 4693 times, from expanded dataset
N: wth N: wta
PROP: wffth PROP: wffta
N: wph N: wps N: wch N: wb N: hyp.1 N: hyp.2 N: hyp.3
PROP: wffph PROP: wffps PROP: wffch PROP: wff(th<->ta) PROP: |-(ph->ps) PROP: |-(ph->ch) PROP: |-((ps/ch)->(th<->ta))
N: syl2anc
PROP: |-(ph->(th<->ta))
(g) Used 4437 times, from expanded dataset
N: cA N: cB N: cC
PROP: classA PROP: classB PROP: classC
N: wph N: wbr N: wps N: hyp.1 N: hyp.2
PROP: wffph PROP: wffACB PROP: wffps PROP: |-(ph->ACB) PROP: |-(ph->(ACB<->ps))
N: mpbid
PROP: |-(ph->ps)
(h) Used 3428 times, from expanded dataset
N: wph N: wth N: wps N: hyp.1 N: hyp.2
PROP: wffph PROP: wffth PROP: wffps PROP: |-(ph->th) PROP: |-(th->ps)
N: wph N: wps N: wch N: syl
PROP: wffph PROP: wffps PROP: wffch PROP: |-(ph->ps)
N: adantr
PROP: |-((ph/ch)->ps)
N: cA N: cr N: cB N: cr
PROP: classA PROP: classRR PROP: classB PROP: classRR
N: wcel N: wcel N: wph
PROP: wffAe.RR PROP: wffBe.RR PROP: wffph
N: w3a
PROP: wff(Ae.RR/Be.RR/ph)
(i) Used 3376 times, from expanded dataset
(j) Used 2933 times, from set.mm
Figure 7: Top 10 most frequently used theorems in refactoring.
-----
B.2 FREQUENTLY USED THEOREMS IN THEOREM PROVING
In Figure 8, we show the top 10 most frequently used new theorems in theorem proving. All of them
are extracted from the expanded dataset. It can be seen that the top 5 mostly used new theorems have
fewer nodes than the other 5, suggesting these shorter new theorems are less proof specific and hence
are used more frequently than those that are much longer and more applicable in niche proof context.
Interestingly, there are two newly extracted theorems that show up in both Figure 7 and 8. The first
one appears in both Figure 7 (b) and Figure 8 (c) and the second one appears in both Figure 7 (c)
and Figure 8 (d). This overlap between frequently used theorems in refactoring and theorem proving
further demonstrates the diverse utility of theorems we extracted.
-----
N: cA N: cB
PROP: classA PROP: classB
N: wph N: wceq N: hyp.1 N: hyp.2
PROP: wffph PROP: wffA=B PROP: |-ph PROP: |-(ph->A=B)
N: ax-mp
PROP: |-A=B
N: cA N: cB
PROP: classA PROP: classB
N: wcel N: wph N: hyp.1
PROP: wffAe.B PROP: wffph PROP: |-Ae.B
N: a1i
PROP: |-(ph->Ae.B)
N: cA N: cB
PROP: classA PROP: classB
N: wph N: wcel N: hyp.1
PROP: wffph PROP: wffAe.B PROP: |-ph
N: a1i
PROP: |-(Ae.B->ph)
(b) Used 16 times, from expanded
dataset
(c) Used 9 times, from expanded
dataset
(a) Used 17 times, from expanded
dataset
N: cA N: cB
PROP: classA PROP: classB
N: wceq N: wph N: hyp.1
PROP: wffA=B PROP: wffph PROP: |-A=B
N: a1i
PROP: |-(ph->A=B)
N: cA N: cB
PROP: classA PROP: classB
N: wph N: wps N: wceq N: hyp.1 N: hyp.2
PROP: wffph PROP: wffps PROP: wffA=B PROP: |-(ph->ps) PROP: |-(ps->A=B)
N: syl
PROP: |-(ph->A=B)
(e) Used 5 times, from expanded dataset
(d) Used 6 times, from expanded dataset
N: wph N: wch N: wth N: hyp.2
PROP: wffph PROP: wffch PROP: wffth PROP: |-(ph->(ch<->th))
N: wph N: wps N: wth N: wch N: hyp.1 N: biimprd
PROP: wffph PROP: wffps PROP: wffth PROP: wffch PROP: |-(ph->(ps->th)) PROP: |-(ph->(th->ch))
N: wph N: wps N: wch N: syld
PROP: wffph PROP: wffps PROP: wffch PROP: |-(ph->(ps->ch))
N: imp
PROP: |-((ph/ps)->ch)
(f) Used 4 times, from expanded dataset
N: wph N: wth N: wps N: hyp.1 N: hyp.2
PROP: wffph PROP: wffth PROP: wffps PROP: |-th PROP: |-(ph->(th->ps))
N: wph N: wps N: wch N: mpi N: hyp.3
PROP: wffph PROP: wffps PROP: wffch PROP: |-(ph->ps) PROP: |-((ph/ps)->ch)
N: mpdan
PROP: |-(ph->ch)
N: wph N: wps N: wch N: hyp.1
PROP: wffph PROP: wffps PROP: wffch PROP: |-(ph->(ps<->ch))
N: wph N: wps N: wch N: biimpd
PROP: wffph PROP: wffps PROP: wffch PROP: |-(ph->(ps->ch))
N: a2i
PROP: |-((ph->ps)->(ph->ch))
(h) Used 3 times, from expanded dataset
(g) Used 4 times, from expanded dataset
N: cA N: cB N: cB N: cC N: cA N: cvv N: cA N: cB N: cC
PROP: classA PROP: classB PROP: classB PROP: classC PROP: classA PROP: class_V PROP: classA PROP: classB PROP: classC
N: wss N: wcel N: wcel N: hyp.1 N: ssexg
PROP: wffAC_B PROP: wffBe.C PROP: wffAe._V PROP: |-AC_B PROP: |-((AC_B/Be.C)->Ae._V)
N: mpan
PROP: |-(Be.C->Ae._V)
(i) Used 3 times, from expanded dataset
N: wch N: wth
PROP: wffch PROP: wffth
N: wps N: wch N: wth N: wps N: wa N: wps N: wch N: wth
PROP: wffps PROP: wffch PROP: wffth PROP: wffps PROP: wff(ch/th) PROP: wffps PROP: wffch PROP: wffth
N: wph N: w3a N: wa N: hyp.1 N: 3anass
PROP: wffph PROP: wff(ps/ch/th) PROP: wff(ps/(ch/th)) PROP: |-(ph<->(ps/ch/th)) PROP: |-((ps/ch/th)<->(ps/(ch/th)))
N: bitri
PROP: |-(ph<->(ps/(ch/th)))
(j) Used 3 times, from expanded dataset
Figure 8: Top 10 most frequently used theorems in theorem proving.
-----
| [
"Yuhuai, Wu",
"Jin Peng, Zhou",
"Qiyang, Li",
"Roger Baker, Grosse"
] | 2021-01-01T00:00:00 | ICLR 2024 | true | 1 | 0 | [
"MetaMath"
] | https://arxiv.org/abs/2402.17032 | https://arxiv.org/abs/2402.17032 | https://www.semanticscholar.org/paper/5e7999443d37916269db5ff758587335335ff75d |
Reasoning with Large Language Models, a Survey | Scaling up language models to billions of parameters has opened up possibilities for in-context learning, allowing instruction tuning and few-shot learning on tasks that the model was not specifically trained for. This has achieved breakthrough performance on language tasks such as translation, summarization, and question-answering. Furthermore, in addition to these associative "System 1" tasks, recent advances in Chain-of-thought prompt learning have demonstrated strong "System 2" reasoning abilities, answering a question in the field of artificial general intelligence whether LLMs can reason. The field started with the question whether LLMs can solve grade school math word problems. This paper reviews the rapidly expanding field of prompt-based reasoning with LLMs. Our taxonomy identifies different ways to generate, evaluate, and control multi-step reasoning. We provide an in-depth coverage of core approaches and open problems, and we propose a research agenda for the near future. Finally, we highlight the relation between reasoning and prompt-based learning, and we discuss the relation between reasoning, sequential decision processes, and reinforcement learning. We find that self-improvement, self-reflection, and some metacognitive abilities of the reasoning processes are possible through the judicious use of prompts. True self-improvement and self-reasoning, to go from reasoning with LLMs to reasoning by LLMs, remains future work. | It is found that self-improvement, self-reflection, and some metacognitive abilities of the reasoning processes are possible through the judicious use of prompts, suggesting that true self-improvement and self-reasoning, to go from reasoning with LLMs to reasoning by LLMs, remains future work. | ## Reasoning with Large Language Models, a Survey
Aske Plaat, Annie Wong, Suzan Verberne, Joost Broekens,
Niki van Stein, Thomas B¨ack
LIACS, Leiden University,
Netherlands
July 17, 2024
**Abstract**
Scaling up language models to billions of parameters has opened up possibilities for in-context learning, allowing instruction tuning and few-shot learning on
tasks that the model was not specifically trained for. This has achieved breakthrough performance on language tasks such as translation, summarization, and
question-answering. Furthermore, in addition to these associative “System 1”
tasks, recent advances in Chain-of-thought prompt learning have demonstrated
strong “System 2” reasoning abilities, answering a question in the field of artificial
general intelligence whether LLMs can reason.
The field started with the question whether LLMs can solve grade school math
word problems. This paper reviews the rapidly expanding field of prompt-based
reasoning with LLMs. Our taxonomy identifies different ways to generate, evaluate, and control multi-step reasoning. We provide an in-depth coverage of core
approaches and open problems, and we propose a research agenda for the near future. Finally, we highlight the relation between reasoning and prompt-based learning, and we discuss the relation between reasoning, sequential decision processes,
and reinforcement learning. We find that self-improvement, self-reflection, and
some metacognitive abilities of the reasoning processes are possible through the
judicious use of prompts. True self-improvement and self-reasoning, to go from
reasoning with LLMs to reasoning by LLMs, remains future work.
### 1 Introduction
Transformer-based Large Language Models (LLMs) that are trained on large datasets
have achieved breakthrough performance at next token prediction [Vaswani et al., 2017,
Radford et al., 2019, Wei et al., 2022a]; they are very good at natural language understanding (GLUE, SQUAD, Xsum) [Wang et al., 2018, 2019, Rajpurkar et al., 2016,
Narayan et al., 2018], translation [Kocmi et al., 2022, Papineni et al., 2002, Sennrich
et al., 2015], question answering [Tan et al., 2023], and other System 1 tasks [Kahne
-----
man, 2011].[1] The success of ChatGPT [Ouyang et al., 2022] has taken the world by
storm.
Transformer-based generative language models whose size is beyond hundreds of
billions parameters are not only very good at language generation, they also enable new
type of machine learning, called in-context learning [Brown et al., 2020]. In-context
learning, also known as prompt-based learning, occurs only in LLMs beyond a certain
size (hundreds of billions of parameters) that are sufficiently rich [Wei et al., 2022a].
In-context learning is inference time, prompt-based, few-shot learning, where model
parameters are not trained or fine-tuned.
System 1 tasks, such as associative language tasks, are easily solved by LLMs with
prompt-based learning, as the many school children around the world that use ChatGPT
daily can attest. (Although the problems are too often not solved correctly, just fluently,
when the model’s association powers lead to hallucination [Huang et al., 2023].) On
the other hand, System 2 tasks, such as grade school math word problems, are more
difficult for LLMs[Cobbe et al., 2021]. To solve math word problems we need to break
down the problem in multiple reasoning steps. Spurred-on by the impressive performance on System 1 tasks, much research has focused on understanding the reason for
the poor performance of LLMs on System 2 tasks, and how it can be improved.
Among this research, the Chain-of-thought experiment [Wei et al., 2022b] stands
out. This work, and subsequently Kojima et al. [2022], showed that adding a simple
instruction to the prompts, Let’s think step by step, can provoke an LLM to perform
the required intermediate reasoning steps, achieving a surprising jump in performance.
The Chain-of-thought paper is a breakthrough in the field of reasoning with LLMs.
Much exciting work has been published that builds on this work.
Grade school math word problems started the research into LLM-reasoning, with
the GSM8K benchmark [Cobbe et al., 2021]. In our survey we discuss papers based
on this benchmark, and directly-related follow up work on reasoning. We focus on
prompt-based approaches. We survey the recent literature using a straightforward taxonomy.
Although the field has only recently started, the jump in performance on reasoning
has excited artificial intelligence and society alike. We provide a research agenda with
opportunities for future research. At the end of this survey, we also discuss connections to other fields, such as self-reflection, metacognition (or thinking about thinking,
see for example Dunlosky and Metcalfe [2008]), and the motivation towards artificial
general intelligence.
Our contributions are:
- A survey of relevant approaches in prompt-based reasoning (grade school math
word problems and closely related domains) in large language models, including
a research agenda.
1In his book Thinking, fast and slow, a bestseller on human psychology, Daniel Kahneman described
System 1 thinking as a near-instantaneous process; it happens automatically, intuitively, and with little effort.
It is driven by instinct and experiences. System 2 thinking is slower and requires more effort. It is conscious
and logical. The automatic operations of System 1 generate surprisingly complex patterns of ideas, but only
the slower System 2 can construct thoughts in an orderly series of steps. In the LLM literature the terms are
often used as shorthand to distinguish single-step associative tasks, from multi-step reasoning tasks, despite
the fact that language tasks such as question answering and translation may require some “slow” thinking.
-----
- A taxonomy based on regular reasoning literature (step generation, step evaluation, and control of reasoning steps).
This survey is organized as follows. Section 2 summarizes the most relevant developments in LLMs, including in-context learning. Of great importance are the benchmarks
that are used in this field. We discuss these in Section 3, followed by our method for
scoping and selecting of papers in Section 4. Next, in Section 5 we provide a taxonomy
of the field, where we discuss the approaches in detail. Then, in Section 6 we discuss
our findings in a broader perspective. We also discuss the relation between reasoning
and work on self-reflection and metacognition. This section concludes with an agenda
for future research. Finally, Section 7 concludes the survey.
### 2 Background: Reasoning with LLMs
Before we dive into the works on reasoning, we review some background terminology
on LLMs. Our overview is brief. Excellent surveys on LLMs are, for example, Minaee
et al. [2024] and Zhao et al. [2023]. We discuss the generic training pipeline for LLMs,
we discuss how in-context learning works, and we discuss the reasoning pipeline. We
start with the generic language model training pipeline.
**2.1** **Training Pipeline Language Model**
LLMs are typically constructed in a sequence of stages, from data preparation, through
training, to inference. The training pipeline for most LLMs is quite elaborate. We will
now list a pipeline of the most common stages, based on the survey by Minaee et al.
[2024].
1. Acquire a large, general, unlabeled, high-quality text corpus. Some considerations on the selection of the texts are discussed in Brown et al. [2020].
2. Pretrain the transformer model [Vaswani et al., 2017] on this large corpus. This
step yields a generalist model. The pretraining is done using a self-supervised
approach on the unlabeled dataset (text corpus).
3. Finetune the general model to a specific (narrow) task. This can be done using supervised-learning with a new labeled dataset consisting of prompts and
answers (supervised finetuning, SFT) [Wei et al., 2022a, Minaee et al., 2024],
specific for the task at hand. (A small number of papers in this survey work in
the finetuning stage.)
4. Instruction tuning is a form of finetuning on a labeled dataset of instruction
prompts and corresponding outputs, to improve instruction following, and thus
the usefulness of models.
5. Align the finetuned model with user expectations (preference alignment). The
goal of this stage is to improve the model to give more ethically and socially
acceptable answers. The machine learning method that is used in this stage can
-----
be, for example, Reinforcement Learning with Human Feedback [Ouyang et al.,
2022] or Direct Preference Optimization [Rafailov et al., 2024].
6. Optimize training to improve cost-effectiveness, for example, with low-rank optimization [Hu et al., 2021], mixed precision training [Micikevicius et al., 2017],
quantization [Jacob et al., 2018], or knowledge distillation [Xu et al., 2024, Gu
et al., 2023].
7. Inference & In-context learning can be used to train the model to provide the
correct answers without changing parameters [Dong et al., 2022, Brown et al.,
2020]. By providing a prompt that contains a small number of examples together
with a question, prompt learning is a form of few-shot learning. This is the stage
in which most of the papers of this survey work, and that is familiar to all general
users of ChatGPT.
Most of the reasoning methods that we discuss in this survey work in stage 7: in-context
learning, using prompts for the LLM to perform a complex multi-step reasoning task.
The following section provides a brief introduction to in-context learning.
**2.2** **In-Context Learning**
In LLMs beyond hundreds of billions of parameters a new kind of learning has emerged,
that is called in-context learning or prompt-learning [Brown et al., 2020]. It occurs at
inference time, and is often able to give good results with few examples; it is a form of
few-shot learning. The large size of the model, containing rich and general knowledge
is enabling this new type of few-shot learning (see Dong et al. [2022] for a survey).
A prompt, consisting of a piece of demonstration context, is concatenated with a
query question, and is given to the language model for prediction [Liu et al., 2023]. For
example, when the task is emotion recognition in a social media post, “I missed the bus
today,” can be followed by “I felt so [ ]”. Alternatively, for translation, we could follow “I missed the bus today,” by “French: [ ]” [Liu et al., 2023]. The prompt contains
background information that is recognized by the model, selecting the desired model
context. In-context learning works when language models contain enough knowledge,
allowing them to generalize on the examples provided in the prompt.
Prompts that contain a few examples are said to perform few-shot learning. Prompts
that contain only instructions without examples are said to perform zero-shot learning.
In-context learning takes place at inference time, after the computationally intensive stages where parameters have been pretrained and finetuned, when the model is
queried by the user to provide answers. No parameters are changed anymore with
in-context learning. This is quite different from the common approach in supervised
deep learning—or self-supervised deep learning—where large datasets are used during
training to update model parameters with backward propagation in lengthy and costly
training epochs [Goodfellow et al., 2016]. Common approaches to few-shot learning,
such as metalearning, do include training and finetuning of parameters to achieve generalization, and are computationally expensive (see, for example, Finn et al. [2017] or
Huisman et al. [2021], Hospedales et al. [2021] for a survey).
-----
Prompts provide a user-friendly interface to LLMs. The success of in-context learning tends to be quite sensitive to the way a prompt is formulated; a new field called
_prompt engineering has emerged to optimize the usefulness of in-context learning by_
learning how to make them do what we want [Radford et al., 2019, Wei et al., 2022a,
Giray, 2023, Sahoo et al., 2024].
**2.3** **Reasoning Pipeline**
Reasoning problems are also solved with a pipeline of stages. A typical approach
to solving a complex problem is to subdivide it into smaller steps and solve those.
This approach is related to divide and conquer [Bellman, 1966]. New steps are (1)
_generated, (2) evaluated, and the number of steps that are generated and searched is_
(3) controlled in some way. The in-context reasoning approaches that we survey follow
a general three-stage pipeline [Madaan et al., 2023]:
1. Generate: generation of steps by the model,
2. Evaluate: evaluation of the predicted steps by an evaluator,
3. Control: control of the number of steps that are generated and how deep ahead
the reasoning process will look.
This three-stage pipeline will be the basis of our taxonomy. But first, we will look at
benchmarks.
### 3 Benchmarks
Progress in artificial intelligence is measured by benchmarks. Benchmarks define the
goal that researchers aim to achieve in their experiments. In natural language processing, a wide array of benchmarks exists to measure progress, such as on question answering (for example, CommonsenseQA [Talmor et al., 2018]), word prediction
(for example, LAMBADA [Paperno et al., 2016]), translation (for example, WMT’22
[Kocmi et al., 2022]), language understanding (for example, GLUE [Wang et al., 2018,
2019]), and text summarization (for example, Xsum [Narayan et al., 2018]). Transformer architectures were first popularized by encoder models such as BERT [Devlin
et al., 2018], for named entity recognition and classification tasks. Subsequently, decoder models such as GPT 2-4 [Radford et al., 2019, Brown et al., 2020, Achiam et al.,
2023] showed impressive progress on natural language benchmarks.
The field of LLMs is quite active. Many different benchmarks exist, and listing a
comprehensive overview of all relevant benchmarks is beyond the scope of this survey. We will mention relevant benchmarks for testing the reasoning abilities of LLMs.
Following Wei et al. [2022b], these are all math word problem benchmarks. The benchmark that is most frequently associated with reasoning by LLMs is a dataset of grade
school math word problems GSM8K [Cobbe et al., 2021]. GSM8K was created by
humans, with an aim of high quality, high diversity, moderate difficulty, and solutions
in natural language. Other benchmarks are the SVAMP varying structures benchmarks
[Patel et al., 2021], the ASDiv dataset of diverse math problems [Miao et al., 2021],
-----
the AQuA dataset of algebraic word problems [Ling et al., 2017], and the MAWPS
benchmark [Koncel-Kedziorski et al., 2016].
We will now briefly discuss these benchmarks; the baseline performance that we
quote is from Wei et al. [2022b].
**GSM8K** To test reasoning skills, the Grade School Math problem dataset (GSM8K)
was developed for testing LLMs [Cobbe et al., 2021]. It consists of 8500 humanwritten math problems. Language models struggled to achieve good performance on
this dataset (pre Chain-of-thought). An example of a math word task is:
_Problem: Beth bakes 4, two dozen batches of cookies in a week. If_
these cookies are shared amongst 16 people equally, how many cookies
does each person consume?
_Answer: 4 × 2 × 12/16 = 6._
The baseline performance of GPT-3 175B is 15.6% accuracy. In comparison, the
performance of Chain-of-thought is 46.9% accuracy.
**ASDiv** The Academia Sinica Diverse MWP Dataset (ASDiv) [Miao et al., 2021] is
specifically designed for high diversity in problem types, formats and difficulty levels.
It consists of 2305 problems. An example problem is:
_Problem: A sandwich is priced at 0.75. A cup of pudding is priced at_
0.25. Tim bought 2 sandwiches and 4 cups of pudding. How much money
should Tim pay?
_Answer: 0.75 × 2 + 0.25 × 4 = 2.5._
The baseline performance of GPT-3 175B is 70.3% accuracy. The performance of
Chain-of-thought is 71.3% accuracy.
**MAWPS** The Math Word Problem Repository (MAWPS) [Koncel-Kedziorski et al.,
2016] allows for the construction of datasets with particular characteristics by selecting
different categories of problems. The dataset consists of 3320 problems. An example
is:
_Problem: Rachel bought two coloring books. One had 23 pictures and_
the other had 32. After one week she had colored 44 of the pictures. How
many pictures does she still have to color?
_Answer: 55 −_ 44 = 11.
The baseline performance of GPT-3 175B is 72.7% accuracy. The performance of
Chain-of-thought is 87.1% accuracy.
**SVAMP** The Simple Variations on Arithmetic Math word Problems dataset (SVAMP)
was designed by Patel et al. [2021]. It consists of 1000 problems, curated from variations of ASDiv-a [Miao et al., 2021] and MAWPS [Koncel-Kedziorski et al., 2016].
An example problem is:
-----
_Problem: Jack had 8 pens and Mary had 5 pens. Jack gave 3 pens to_
Mary. How many pens does Jack have now?
_Answer: 8 −_ 3 = 5.
The baseline performance of GPT-3 175B is 65.7% accuracy. In comparison, the performance of Chain-of-thought is 68.9% accuracy.
**AQuA** The Algebraic Question Answering dataset [Ling et al., 2017] is a large dataset
of 100,949 questions, answers, and rationales. The dataset is based on a combination
of a smaller seed dataset and crowdsourcing. An example question is:
_Question: Two trains running in opposite directions cross a man stand-_
ing on the platform in 27 seconds and 17 seconds respectively and they
cross each other in 23 seconds. The ratio of their speeds is: Options: A)
3/7 B) 3/2 C) 3/88 D) 3/8 E) 2/2
_Answer: B._
The baseline performance of GPT-3 175B is 24.8% accuracy. The performance of
Chain-of-thought is 35.8% accuracy.
There is a wide variety of benchmarks, and there is a wide variety of performance
in benchmarks. Some are easily solvable by current LLMs, and some are significantly
harder. Benchmark design is an important part of the field of reasoning in LLMs.
Currently the GSM8K benchmark is popular; baseline model performance is weak,
and reasoning prompts can substantially improve performance. As performance on
GSM8K improves, different (harder) benchmarks will become popular.
### 4 Selection of Papers
The papers in this survey were selected as follows. Baseline LLMs have difficulty
solving math word problems, specifically on benchmarks listed in the previous section.
We take the ability to solve those benchmarks as a proxy for reasoning ability. We
initially performed a literature search for papers that use these benchmarks, and that
contain the search terms reasoning and large language model in their title or abstract.
We also searched for papers that referenced the Chain-of-thought paper. The resulting
papers were curated based on recency, relevance, substance, and novelty.
We favor recent papers (two years prior to the writing of the survey), related to
the Chain-of-thought approach of generating intermediate reasoning steps, that solve
tasks such as math word problems, and that work by prompt-based in-context learning.
We also include some papers that work by finetuning or supervised learning that relate
to, or inspire, the Chain-of-thought approaches. Furthermore, we include approaches
outside math word problems that showed interesting approaches to reasoning, such as
applications in coding and autonomous agents, because of their approach to grounding.
-----
### 5 Prompt Generation, Evaluation and Control
This survey examines how an architecture that is good at System 1 tasks can be prompted
to solve System 2 tasks. The Chain-of-thought paper showed how a simple command
could prompt an LLM to perform reasoning steps, yielding much better performance
in math word problems. Since then much research has further explored this approach,
trying to build the ultimate general problem solver for System 1 and System 2 problems.
Following the pipeline of Section 2.3, the prompts must (1) generate the reasoning
steps, (2) evaluate the answer to the steps, and (3) control the number of steps that are
generated, the shape (or complexity) of the reasoning process must be controlled. We
will now briefly discuss the three stages. Please refer to Figure 1 for a diagram of the
different approaches for the generation, evaluation, and control of reasoning steps, and
to Table 1.[2]
**Prompt for Step Generation** The first order of business is to create a prompt that
instructs the LLM to generate reasoning steps. The problem must be split into substeps.
This can be achieved with a problem-specific prompt that contains elements of the
problem, such as: “First calculate how many marbles Mary had originally, then how
many her friend had, and finally how many they had together.”
In general, it is possible to prompt an LLM to fill in the blanks in a step-by-step
fashion. In the papers that we will discuss, there are three main approaches for generating the step-by-step prompt. The prompt may be (1) handcrafted for the problem by
the researchers (hand-written prompt), or (2) the prompt or prompts may come from an
source that is external to the model, such as another model or a dataset (prompt using
_external knowledge), or (3) the model itself can be prompted to generate a (series of)_
prompt(s) to analyze the problem (model-generated prompt). As we will see, all three
approaches have their advantages and disadvantages.
Generating the subproblem-steps is the first stage that is necessary for in-context
learning to perform reasoning. Each paper in our survey performs at least this stage of
the reasoning pipeline. In some of the early papers (around 2022) it is the only stage
of the pipeline that is performed.
**Prompt for Result Evaluation** After the prompt has been generated and the model
has answered it, the next step in the reasoning pipeline is to evaluate the answer. Again,
we see three main approaches for substep evaluation. First, the steps may be evaluated
by (1) the model itself (self-assessment). Second, (2) an external program can be used
to evaluate the steps. For example, when the steps are expressed as computer code,
an external interpreter or compiler can be used to check the validity and the outcome
(tool-based evaluation). Finally, (3) an external model can be used, LLM or otherwise.
For example, in robotics, an external physics model can determine if certain actions are
physically possible (external model validation).
2We show the approaches in the Figure in their main category only. Some approaches show innovations
in two categories, and are shown twice. (Since all approaches have a generation, an evaluation, and a control
aspect, all could in principle occur three times—all three columns can be found in Table 1).
-----
Figure 1: Taxonomy of LLM-Reasoning Approaches: Prompt Generation, Evaluation,
and Control
-----
**Perform Control of Reasoning Steps** A reasoning process that consists of multiple
steps is a sequential decision process [Littman, 1996]. When a single chain of reasoning steps is generated, the control flow of the reasoning process is simple: greedily
evaluate the first step and then the next one, if present. The control flow of the reasoning process may also be more intricate. Some reasoning problems can be divided into
multiple subproblems. To execute, evaluate and combine the results of all substeps,
a separate controller may be needed. This controller can be a prompt or an external
algorithm.
Again, we distinguish three approaches. Most papers use (1) a greedy selection
approach: a single prompt with a single chain of steps is generated, and these steps are
directly executed and followed. The second approach (2) is to generate an ensemble
_strategy of reasoning steps, evaluate them, combine the individual results, and present_
them as the result of the ensemble. Finally, (3) a full tree-search or a reinforcement
_learning (RL) algorithm can be used as scaffolding. In this case, when a step is fol-_
lowed and evaluated, the LLM can roll back and try a different reasoning step. This
is a breadth-first search approach [Plaat, 2020]. Going further, a full reinforcement
learning approach can be used [Sutton and Barto, 2018, Plaat, 2022] to find an optimal
policy for the sequential decision process. A full Markov Decision Process of state, action, transition, and reward function is specified, and step control can become a process
where prompts are generated dynamically.
**Domain** Many papers are applied to math word problems (natural language descriptions of math problems). Math problems were the original inspiration for the experiments with reasoning in LLMs. Other application domains include autonomous agents,
robotic movement, generating computer programs, and playing computer games. We
will discuss these in more detail with the individual approaches.
**Taxonomy Table** Table 1 lists the papers of this survey. They are listed by the domain
they work on, the type of prompt generation, the evaluation of the result, and the control
method. The approaches in the table are grouped, divided by horizontal lines.
The first group, from Scratchpad to Self-ask, focuses on creating a prompt that generates the reasoning steps. The entries in the cells of this column are shown in bold,
highlighting the focus of the approaches. The approaches in this group can be considered to be the start of the field of LLM-reasoning. The Chain-of-thought approach is
especially an inspiration for many works. The prompts are often written “manually” by
the researchers, the steps are encoded in one prompt, and step control is greedy. There
is no specific evaluation of the steps, other than comparing results to the benchmark.
The Scratchpad approach is special in that it uses supervised learning, not promptlearning; the work showed that LLMs can be made to generate internal reasoning steps
by supervised learning, paving the way for the later prompt-based papers.
The second group, from Self-verification to Self-taught-reasoner, focuses on evaluation of the reasoning steps in the prompt. This column is shown in bold in the table.
The approaches in this group aim to improve the Chain-of-thought results by reducing the error accumulation that occurs when multiple steps are taken in a reasoning
chain. A variety of step control methods is used by these approaches, which is dis
10
-----
Table 1: Taxonomy of approaches: Generation, Evaluation, and Control
**Approach** **Domain** **Step generation** **Step evaluation** **Step control**
Scratchpad [Nye et al., 2021] math word **hand-wr/supervised** - greedy/1prompt
Chain-of-thought [Wei et al., 2022b] math word **hand-written** - greedy/1prompt
ZS-CoT [Kojima et al., 2022] math word **hand-written** - greedy/1prompt
Auto-CoT [Zhang et al., 2022] math word **model-generated** - clustering
Complexity [Fu et al., 2022] math word **hand-written** self-consistency greedy/1prompt
Self-ask [Press et al., 2022] math word **external knowledge** LLM multi-hop questions
Self-verification [Weng et al., 2022] math word hand-written **back-verify** ensemble
Self-consistency [Wang et al., 2022b] math word hand-written **majority** ensemble
Codex [Chen et al., 2021] code - **tool-based** -
Self-debugging [Chen et al., 2023] code hand-written **tool-based** greedy
Fun-search [Romera-Paredes et al., 2024] code hand-written **tool-based** evolutionary algorithm
LLaMEa [van Stein and B¨ack, 2024] code hand-written **tool-based** evolutionary algorithm
MathPrompter [Imani et al., 2023] math hand-written **tool-based** ensemble
Program-of-thoughts [Chen et al., 2022] math word hand-written, Codex **Python+Consist.** decouple reason/compute
Program-aided-language [Gao et al., 2023] math word hand-written, Codex **NLP/Python** ensemble
Refiner [Paul et al., 2023] math word finetune **critic model** gen/crit feedback
Self-corrector [Welleck et al., 2022] math word finetune **corrector model** gen/corr feedback
Self-improvement [Huang et al., 2022a] math word finetune **self-assessment** CoT/consistency
Say-can [Ahn et al., 2022] robot model-generated **external model** greedy
Inner-monologue [Huang et al., 2022b] robot hand-written **various** greedy
Self-taught-reasoner [Zelikman et al., 2022] math word finetune **augmentation** greedy/feedback
Least-to-most [Zhou et al., 2022] math word hand-written self-assessment **curriculum**
Progressive-hint [Zheng et al., 2023] math word model-generated self-assessment **stable prompt**
Self-refine [Madaan et al., 2023] math word model-generated self-assessment **greedy/feedback**
Tree-of-thoughts [Yao et al., 2024] puzzles model-generated self-assessment **BFS/DFS**
Buffer-of-thoughts [Yang et al., 2024] math word thought template self-assessment **buffer manager**
Beam-search [Xie et al., 2024] math word model-generated self-assessment **Beam Search**
ReAct [Yao et al., 2022] action external knowledge self-assessment **reinforcement learning**
Reflexion [Shinn et al., 2024] decision model-generated ext model **reinforcement learning**
Voyager [Wang et al., 2023] Minecraft model-generated Minecraft **reinforcement learning**
11
-----
cussed in more detail later. Note that not all approaches use natural language problems
(often math word problems). For example, the subgroup of Codex to Program-aidedlanguage focuses on formal languages. They generate code or math equations, typically in Python, to formalize the steps of the reasoning problem, or as result of the task.
LLMs are quite good at code generation, and these approaches typically achieve good
performance. The use of code also allows the approaches to call external programs
such as interpreters and debuggers to evaluate the correctness of the reasoning steps
that are generated.
There is also a special subgroup, Refiner to Self-improvement, that does not use
prompt learning but finetuning. Here, new data is generated based on reasoning exemplars, which is then used to further train the model. The extra data is often generated
as a separate dataset, sometimes called critic or corrector.
There are two approaches, Say-can and Inner-monologue, whose application domain is control of robot movement. Robotic movement is constrained by the laws of
physics (both in the body of the robot as in aspects of its environment). The laws of
physics are learned and used to ground the reasoning steps in reality (to reduce hallucination).
The third group, Least-to-most to Voyager, addresses step control (approaches
shown in bold in this column). Whereas in the previous approaches the reasoning steps
are written in a single, static, prompt, these approaches generate the steps in multiple,
dynamic, prompts. This allows control of the space of reasoning steps. Various search
control approaches are used, all in the form of an external algorithm that performs calls
to the LLM with different prompts. The control methods range from simple greedy and
depth-first search to elaborate beam search and reinforcement learning schemes.
In summary, we see a diverse array of methods that often achieve high performance
in reasoning about their respective domains. To better understand the approaches, let
us discuss the techniques in more detail, starting with the generation of steps.
**5.1** **Generation of Steps**
Originally, LLMs performed poorly on math word problems (GSM8K [Cobbe et al.,
2021]). Some different approaches were tried, for example scaling up the size of the
LLM [Rae et al., 2021]. The LLM architecture, based on transformers, is designed to
produce a single token. When we prompt such an architecture to produce an answer,
it does so. What we should do is prompt it to follow intermediate steps, answer those,
and thus work towards the final answer, just as a student is taught to break down a
complex problem into smaller steps. We should take the model by its hand and teach it
to write down the intermediate steps, and combine the intermediate results [Nye et al.,
2021].
This idea was used by Nye et al. [2021] in Scratchpads, a transformer model that
performs multi-step computations by asking it to emit intermediate computation steps
into a scratchpad. They train the model by supervised learning (not prompt-based
in-context learning). Figure 2 shows an example. On experiments with addition, polynomial evaluation, and Python code execution, versions that produced the intermediate
steps on a scratchpad performed considerably better than versions that did not.
12
-----
Figure 2: Example of input and target for supervised learning on a long addition problem of adding two numbers. The carry is recorded in the C: digit. Comments (after #)
are not part of the learning target [Nye et al., 2021]
If supervised learning can produce intermediate steps, would prompt-learning be
able to do so too?
**5.1.1** **Hand-written Prompt**
This question was studied by Wei et al. [2022b], amongst others. A basic way to instruct an LLM to generate steps by prompt-learning is to manually write a prompt for
the large language model to follow the reasoning steps. They showed in their Chain-ofthought paper that with such a prompt the LLM follows such intermediate steps. When
the LLM is prompted to rephrase information from the question as intermediate reasoning steps in its answer, the LLM performed much better than when it was prompted
to answer a math problem directly, without reproducing the information from the question in its answer. The example from the Chain-of-thought paper is shown in Figure 3
Wei et al. [2022b]. Performance figures were given in Section 3 on benchmarks.
The substantial performance improvement by Chain-of-thought has caused much
excitement and has opened up further research on reasoning with LLMs. In the original Chain-of-thought paper the prompts were handwritten by the researchers for the
individual types of problems, and evaluations are conducted with five different benchmarks (not by an LLM).[3] In a later work the prompts were generated automatically by
the LLM [Zhang et al., 2022].
Kojima et al. [2022] go a step further. They show that the simple addition of a
single text to the prompt (Let’s think step by step) significantly improves performance.
Since this text does not contain problem-related elements, this can be considered as a
form of zero-shot learning. Figure 4 compares the approaches. Experiments further
show that with this addition to the prompt, significant performance gains are achieved
3The Chain-of-thought idea is about prompt generation, not about the evaluation or the search control of
the reasoning steps. Hence, in Table 1 Chain-of-thought is labeled as greedy without an evaluation.
13
-----
Figure 3: Chain of Though Prompting. In blue at the top the prompt, in green at the
bottom the answer. When shown the longer example prompt, the LLM follows the
longer example when answering the question [Wei et al., 2022b].
Figure 4: Zero-shot Chain-of-thought: Let’s think step by step [Kojima et al., 2022]
14
-----
Figure 5: Self-Ask asks follow-up questions, and uses an external search engine [Press
et al., 2022]
on a diverse set of reasoning benchmarks, including arithmetic, symbolic, and logical
reasoning.
The Chain-of-thought idea itself is inspired by earlier work where natural language
steps are generated for arithmetic reasoning [Ling et al., 2017, Cobbe et al., 2021],
and the use of formal languages for reasoning [Roy and Roth, 2016, Chiang and Chen,
2018, Amini et al., 2019, Chen et al., 2019].
**5.1.2** **Prompt using External Knowledge**
Chain-of-thought shows that an LLM gives better answers to complex problems when
it is guided to take individual steps. Prompts are written manually, from scratch, by the
researchers.
We can use external information about the problem to improve the prompt. Press
et al. [2022] study how subproblems are related to the main problem, which they call
_compositional reasoning. They study how often a model is able to answer the sub-_
problems, but not the overall problem. This difference is called the compositionality
gap. They find that in GPT-3, as model size increases, the compositionality gap does
not decrease: the single-hop question-answering performance improves faster than the
multi-hop performance. This shows that while more powerful models memorize and
recall more factual knowledge, no improvement in their ability to perform compositional reasoning occurs. They find that the ability to reason does not depend on the size
of the model.
Subsequently, a method called Self-ask is proposed, that asks elicitive follow-up
15
-----
questions (like Chain-of-thought, but with the follow up: prompt), see Figure 5. The
model is then used to answer these follow-up questions. Self-ask can also use an external search engine to answer intermediate prompts, instead of the model. The model
takes as input a compositional question which it decomposes. The initial subquestion is
fed into the search engine, and the answer is processed by the model, which generates
another subquestion, and so on, until it produces the final answer.
The approach performs a few percentage points better than vanilla Chain-of-thought
on three benchmarks that were specifically designed for multi-hop questions.
**5.1.3** **Model-Generated Prompt**
In addition to manually writing prompts or using external information, we can also try
to let the LLM itself study the problem to write the best reasoning-prompt, a form of
self-improvement. An example of this approach is Auto-chain-of-thought [Zhang et al.,
2022]. This approach builds on the observation by Kojima et al. [2022] that large language models are zero-shot reasoners. First, Auto-chain generates specific questions
for a given dataset and partitions them into clusters. Then an external algorithm uses
the model to generate examples that are sampled for diversity. The constructed demonstrations augment the in-context prompt. The automatically generated prompts are
reported to perform as well or better than the hand-written Chain-of-thought prompts
on ten benchmarks using GPT-3.
Fu et al. [2022] introduce Complexity-based prompting. Inspired by Chain-ofthought and Self-consistency, this work studies which prompts achieve the best results
on math word and other reasoning problems. Their work specifically studies the impact
of the complexity of the reasoning chain, and introduces a related reasoning approach
(Complexity-based prompting). They find that prompts with the largest complexity
(the most reasoning steps) perform best. Further, they find that outputs (answers) with
the highest complexity are the best. Complexity-based prompting achieves high performance on three math reasoning benchmarks.
Another approach that uses model-generated prompts is Buffer-of-thoughts. We
will discuss this approach in Section 5.3.3.
**5.2** **Evaluation of Steps**
After discussing prompts for the generation of reasoning steps, the next stage in the
reasoning pipeline (Section 2.3) is evaluation of the results of the steps, to reduce the
error of multi-step reasoning chains.
We will start with approaches where the same model performs step-generation and
step-evaluation.
**5.2.1** **Self-Assessment**
When LLMs are prompted to perform reasoning steps, they perform a sequence of steps
and predict multiple tokens. Performing a sequence of steps makes them sensitive to
mistakes and vulnerable to error accumulation [Weng et al., 2022, Xiao et al., 2023a].
Several methods have been developed to prevent error accumulation. One approach is
16
-----
Figure 6: Self-Consistency [Wang et al., 2022b]
to create a new model to separately evaluate the results. Shen et al. [2021] and Li et al.
[2022b] train an external verifier to check results.
In contrast, Weng et al. [2022] propose an automated approach using evaluation by
the same LLM, called Self-verification. They note that human reasoning also suffers
from the problem of accumulating errors, and that in human reasoning we frequently
revisit our thought process to verify the accuracy of our reasoning steps. Thus, they
propose to apply such a backwards self-verification approach. The LLM is prompted
to use the conclusion of the Chain-of-thought reasoning chain as a condition for solving
the original problem and then compare the answer going back to the original question.
The LLM is given variations of its own conclusion and is instructed to choose the one
with the highest similarity to the original question. (Note that there can be feedback
issues using an LLM to evaluate itself, for a discussion see Zheng et al. [2024].) Experiments are reported on GPT-3 [Chen et al., 2021] and on Instruct-GPT [Ouyang et al.,
2022]. The performance of Chain-of-thought was improved by a few percentage points
on arithmetic and general reasoning tasks.
A popular related approach is called Self-consistency [Wang et al., 2022b]. Selfconsistency is a straightforward ensemble approach. Greedy single-path decoding is
replaced by sampling diverse reasoning paths, evaluating them, and selecting the most
consistent answer. Self-consistency asks the LLM to simply perform the same query
multiple times, and takes the majority-vote of the answers. Self-consistency works
since complex reasoning problems typically allow different reasoning paths that lead
to the correct answer. Figure 6 summarizes the approach.
Self-consistency has been evaluated on arithmetic reasoning, commonsense reasoning and symbolic reasoning, on a variety of LLMs, including GPT-3 [Tay et al., 2022,
Brown et al., 2020, Thoppilan et al., 2022, Chowdhery et al., 2023]. Self-consistency
improves the performance of Chain-of-thought typically by 10-20 percentage points,
and has been used as a baseline in many of the other approaches in this survey. (Selfverification also reports that performance is improved when used in combination with
17
-----
Self-consistency [Wang et al., 2022b] and with Program-aided-language [Gao et al.,
2023].)
**5.2.2** **Tool-based Validation**
Another possibility to improve the accuracy of evaluating the reasoning steps is to
switch from a natural to a formal language. The advantage of a formal language is
that it is less ambiguous than a natural language. Examples are computer languages,
such as Python, or mathematical equations. Using a formal language for reasoning is a
popular approach, and we discuss seven papers. Many approaches generate the steps in
Python, and the code can then be evaluated by a formal evaluator, such as a compiler,
debugger, or interpreter.
LLMs have been quite successful in generating computer code from natural language prompts. Chen et al. [2021] introduced Codex, a GPT model that was trained
on publicly available code in the repository GitHub. A production version of this work
was introduced under the name GitHub Copilot. Codex is able to generate correct programs from descriptions in natural language, such as commentary strings. Figure 7
shows examples that are produced by Codex.
The work on Codex is used as a basis for further research on reasoning in LLMs.
Human programmers, when writing code, typically follow a cycle of writing some
code, executing it to look for errors, and then using the feedback to improve the code.
This same approach is followed in the Self-debugging work [Chen et al., 2023]. Selfdebugging teaches a large language model to debug its generated program code via
few-shot demonstrations. It follows the same steps of (1) code generation, (2) code
execution, and (3) code explanation (see Figure 8).
Self-debugging is able, without human feedback on the code’s correctness or error
messages, to identify mistakes in the code that was generated by itself from investigating the execution results. Self-debugging can also provide an explanation of the
generated code in natural language. It achieves strong performance on text-to-SQL
generation, C++-to-Python transcoding, and text-to-Python generation.
Several works use self-debugging to generate working code tuned for solving specific problems automatically, without human feedback. Romera-Paredes et al. [2024]
introduced FunSearch, an approach that integrates formal methods and LLMs to enhance mathematical reasoning and code generation. FunSearch is capable of producing
functionally correct programs that adhere to specified requirements. It uses a genetic
algorithm approach with multiple populations of candidate solutions (programs), which
are automatically evaluated (using tools depending on the problem specification). In
addition to the problem specification in the form of an evaluate function, also an initial
program is given to the LLM in the first prompt. After evaluating a number of generated programs from the starting prompt, a new prompt using ‘best-shot prompting’
is created in an iterative fashion, combining a selection of k sampled programs in a
sorted list (ascending according to their evaluation score), and the LLM is requested to
generate program k + 1. Another work leverages evolutionary computation methods
to generate and optimize evolutionary algorithms [van Stein and B¨ack, 2024]. This approach, LLaMEA (Large Language Model Evolutionary Algorithm), utilizes LLMs to
design and optimize evolutionary algorithms. The approach uses LLMs to generate ini
18
-----
Figure 7: Codex [Chen et al., 2021]
Figure 8: Self-Debugging [Chen et al., 2023]
19
-----
Figure 9: Program-aided-language [Gao et al., 2023]
tial algorithmic structures, which are then refined through mutation and selection. This
enhances the efficiency of algorithm design, particularly in fields requiring innovative
and adaptive solutions. A key difference between FunSearch and LLaMEA is that
LLaMEA uses a sample-efficient elitism strategy by iteratively improving the best-sofar solution, requiring significantly fewer prompt evaluations than the large-population
strategy proposed in FunSearch.
To improve prompt-based reasoning, Codex is used in an ensemble approach named
MathPrompter [Imani et al., 2023]. This approach generates multiple algebraic expressions or Python functions, which then solve the same math problem. The results are
compared, just like in Self-consistency and Self-verification, raising the confidence
level in the results. MathPrompter achieved state-of-the-art results on the MultiArith
dataset (78.7% → 92.5%), evaluated on GPT-3 175B.
Two other approaches that use a formal language are Program-of-thought (PoT)
[Chen et al., 2022] and Program-aided-language (PAL) [Gao et al., 2023]. Both approaches use the LLM to generate Python and then use a Python interpreter to evaluate
the result. PoT and PAL are similar approaches. PoT uses benchmark-specific prompts;
PAL uses generic prompts, and has been tested on more benchmarks and has been used
in other approaches. Figure 9 illustrates the PAL approach.
When the evaluation of the reasoning steps is offloaded to the Python interpreter,
decomposing the natural language problem into executable code-steps remains the
only task for the LLM. (Earlier work in mathematical word problems, such as Ling
et al. [2017], showed how to decompose a problem and reach an answer.) Gao et al.
20
-----
Figure 10: Refiner [Paul et al., 2023]
[2023] provide extensive experimental evidence about the synergy between the neural
LLM and the symbolic interpreter. Experiments are performed over 13 mathematical,
symbolic, and algorithmic reasoning tasks, achieving more accurate results than much
larger models.
**5.2.3** **External Model Validation**
We have seen many successful examples of prompt-based in-context reasoning and
evaluation. We will now look at related reasoning approaches that follow a more traditional parameter learning approach. We describe three natural language approaches
that follow this route. All approaches evaluate the output of the model and generate
corrective data. That data is then added to the training pipeline, and the model is subsequently finetuned.
**Finetuning** The Refiner approach [Paul et al., 2023] uses a generator model and a
critic model to provide fine-grained feedback on reasoning errors. The generator generates multiple reasoning hypotheses, the critic evaluates results by randomly selecting
a hypothesis for feedback. The generator model is finetuned based on its reasoning errors. A small supervised model is used to overcome the cold-start problem. Figure 10
shows an example of how the critic provides feedback to the generator. The approach
is reported to work well on math word problems and synthetic natural language reasoning.
21
-----
Figure 11: Self-Taught-Reasoner [Zelikman et al., 2022]
Welleck et al. [2022] follow a similar approach in their Self-correction approach.
The corrector is a separate model specialized in refining the outputs of the generator.
Unlike Refiner, where the generator is finetuned based on the critic feedback, Selfcorrection finetunes the corrector to rectify errors in the hypotheses produced by the
generator.
A third finetuning approach is Self-improvement, by Huang et al. [2022a]. Here too
the base model data is augmented by LLM-generated rationales, and then finetuned.
Noteworthy in all three finetuning approaches is that LLMs are capable of improving themselves by training on their own generated output, and that stability problems
inherent in feedback loops are overcome.
**Dataset Augmentation** The final finetuning approach that we discuss uses dataset
augmentation. An explicit intermediate reasoning is called a rationale. Rationale generation has been shown to be valuable for LLMs across diverse tasks such as mathematical and commonsense reasoning, code evaluation, social bias inference, and natural
language inference [Zelikman et al., 2022]. Zelikman et al. [2022] describe how reasoning steps are used to create rationales, that are then used to augment the dataset on
which the model is finetuned. The approach is called Self-taught-reasoner. Figure 11
illustrates the approach. In Self-taught-reasoner, an augmentation dataset is created by
attempting to solve the original dataset using the current model’s rationale generation
ability in each iteration. Next, the dataset is augmented using rationalizations, using
ground-truth answers to problems the model failed to solve. Finally, the large language
model is finetuned on the combined dataset.
**Reasoning about Robot Behavior** In addition to math word problems, prompt-based
reasoning has also been used to reason about robot behavior. Language models contain
a large amount of information about the real world [Ahn et al., 2022]. In theory, this
should allow the model to exhibit realistic reasoning about robotic behavior. However,
the models do not have knowledge about particular embodied aspects of a particular
robot. If we could compare a Scratchpad-like list of intermediate reasoning steps with
22
-----
Figure 12: Say-Can compared to other language models [Ahn et al., 2022]
a list of possible movements of the robot in its environment, then we could prevent the
model from suggesting impossible joint movements and actions, and prevent accidents.
Such an approach has been tried in the Say-can paper [Ahn et al., 2022]. Saycan learns a value function [Kaelbling et al., 1996] of the behavior of a robot in an
environment using temporal difference reinforcement learning Sutton [1988]. This
value function is then combined with prompt-based reasoning by the language model,
to constrain it from suggesting impossible or harmful actions.
The goal of Say-can is to ground language in robotic affordances. In contrast to
Scratchpad, which used supervised learning, the affordance model is learned interactively by reinforcement learning, and then applied using prompt-based learning by the
LLM. The robot can act as the language model’s hands and eyes, while the language
model has high-level semantic knowledge about the task. The LLM (Say) provides a
task-grounding to find the actions to achieve the high-level goal. The learned affordance function (Can) provides a world-grounding to allow what is possible. Say-can
is evaluated on 101 real-world robotic tasks, such as how to solve tasks in a kitchen
environment (see Figure 12).
Where Say-can learns affordance as a separate function, another approach, Innermonologue [Huang et al., 2022b] formulates robotic planning directly as part of the language prompt. This approach incorporates environmental information into the prompt,
linguistically, as an inner monologue. As in Say-can, the information comes as feedback from different sources. Unlike Say-can, the information of physics and the world
is inserted directly into the prompt.
Inner-monologue consists of many elements: it uses InstructGPT [Brown et al.,
2020] for multi-step planning, scripted modules for object recognition, success detection, task-progress scene description, and language-conditioned pick-and-place primitives, similar to CLIPort [Shridhar et al., 2022]. These elements generate textual descriptions that are used in prompt-based learning. Figure 13 gives an example of the
working of Inner-monologue.
The language feedback that is thus generated significantly improves performance
on three domains, such as simulated and real table top rearrangement tasks and manipulation tasks in a kitchen environment. There are many studies into robotic behavior. A
recent approach related to Inner-monologue is Chain-of-tools, which proposes a plan
23
-----
Figure 13: Inner-Monologue [Huang et al., 2022b]
execute-observe pipeline to ground reasoning about tool behavior [Shi et al., 2024a,b].
This concludes our discussion of the second stage of the reasoning pipeline, evaluation of the reasoning steps.
**5.3** **Control of Steps**
The third stage in the reasoning pipeline in Section 2.3 is reasoning control. This stage
controls how many sub-steps are generated, and how deep into the future the reasoning
chain is generated.
There are three main approaches: (1) greedy selection, which generates a step and
then follows it, (2) ensemble strategy, which generates a set of possible next steps, and
(3) a full tree-shaped search which generates multiple options for the step, and follows
them multiple steps into the future, traversing a search tree with backtracking, controlling an exponential search space. We include reinforcement learning approaches, that
interactively learn an optimal policy for such a reasoning space.
**5.3.1** **Greedy Selection**
Most earlier works on prompt-based reasoning follow the greedy approach: generate a
single prompt with a sequence of steps and follow them. Among the greedy reasoners
are Chain-of-thought, Auto-CoT, and Zero-shot CoT. Inner Monologue and Say-Can
also use greedy reasoning.
In Least-to-most prompting [Zhou et al., 2022], the key idea is to break down a
complex problem into simpler subproblems and then solve these in sequence, explicitly
encoding them in the prompt. It is related to Complexity-based prompting. In Least-tomost, finding the answer to each subproblem is facilitated by the answers to previously
solved subproblems, as in a curriculum [Bengio et al., 2009]. The authors find that on
24
-----
Figure 14: Least-to-most prompting [Zhou et al., 2022]
symbolic manipulation, compositional generalization, and math reasoning, the Leastto-most prompting is capable of generalizing to more difficult problems than those that
are given in the prompts. Figure 14 illustrates the idea.
**5.3.2** **Ensemble Strategy**
The second kind of reasoning control is based on an ensemble of (sequences of) reasoning steps. The ensemble approach is a well-known technique in machine learning to make a strong learner out of multiple weaker learners [Sagi and Rokach, 2018,
Breiman, 2001]. For most problems, multiple different options for the next step exist. When all or some of these are generated and evaluated, then the best result or
the consensus result can be reported as the outcome of an ensemble of steps. Various
approaches have been proposed.
We already mentioned Self-consistency [Wang et al., 2022b] and Self-verification
[Weng et al., 2022] in Section 5.2.1. They are popular ensemble approaches to evaluate
the results of reasoning steps in prompt learning. The greedy single-path decoding used
in Chain-of-thought prompting is replaced by sampling a diverse set of reasoning paths,
evaluating them, and selecting the most consistent answer.
In another domain Chain-of-experts builds on Chain-of-thought with a mixture of
experts ensemble for complex combinatorial operations research problems [Xiao et al.,
2023b]. PAL and MathPrompter also use the ensemble approach. They generate multiple steps, which are evaluated and whose answer is combined, or the best step is
chosen.
The ensemble approach is a popular approach in LLM-reasoning.
25
-----
**5.3.3** **Reinforcement Learning**
In the greedy approach, a single reasoning path is generated and traversed. In reasoning, often multiple valid reasoning steps are possible, but pursuing all possibilities over
multiple reasoning steps may lead to an infeasible number of possibilities.
The third kind of reasoning control is to use a full-fledged controller that can traverse a tree, or even perform reinforcement learning to do so [Sutton and Barto, 2018,
Kaelbling et al., 1996, Plaat, 2022]. This group of control approaches enables the most
elaborate control of the reasoning process, and is used by many works, as we will see.
When decomposing the problem, multiple alternative steps are generated that can be
searched multiple steps into the future. Then, backtracking can be performed, allowing
alternative steps to be tried.
Where greedy and ensemble processes can be controlled with a prompt by the LLM,
this third group is more complex, and an external algorithm is used to control the
reasoning process. The external algorithms call the LLM as a subroutine prompting it
to perform its tasks. This allows more complex reasoning control, but we are no longer
performing prompt-based self-reasoning; control has been given to an algorithm that is
external to the LLM and external to prompt-learning.
We start our discussion of control strategies with depth-first and breadth-first search,
then go to beam search, and then to full reinforcement learning.
**Breadth first search** A complex reasoning space can be traversed with a search algorithm. Tree-of-thoughts includes a search algorithm to dynamically follow different
reasoning steps [Yao et al., 2024]. When one reasoning path has been traversed, a
search algorithm can backtrack, and try an alternative path. The paper describes both a
breadth-first-search and a depth-first-search controller.
The evaluation part in Tree-of-thoughts is performed with a prompt by the LLM.
Together, the trio of generation, evaluation, and control allow systematic exploration of
the space of reasoning steps with look-ahead and backtracking. The authors compare
their approach to Chain-of-thought and Self-consistency. Chain-of-thought builds a
reasoning out of a path of thoughts, Self-consistency creates an ensemble of thoughts,
and Tree-of-thoughts constructs a tree structure. Figure 15 illustrates the different reasoning structures.[4]
Another approach, Buffer-of-thoughts [Yang et al., 2024], goes a step further towards meta-reasoning. It introduces a meta-buffer that stores high-level thought-templates.
These universal thought-templates are derived from a variety of tasks. Figure 16 compares the Buffer-of-thoughts approach to other approaches such as Chain-of-thought
and Tree-of-thoughts. Buffer-of-thoughts outperforms other methods in puzzles such
as Game of 24 and checkmating. Thought templates are related to metacognition
(thinking about thinking), which is further discussed in Section 6.2.3.
**Beam search** A related search method is Beam-search. Beam-search-for-reasoning
[Xie et al., 2024] focuses on control of the space of possible reasoning paths. In some
4A similarly named approach is Graph-of-thoughts [Besta et al., 2024]. Graph-of-thoughts allows more
general reasoning graphs, providing a formal framework, where the different elements can then be specified
manually.
26
-----
Figure 15: Reasoning structure of Chain-of-Thought, Self-Consistency, and Tree-ofThoughts [Yao et al., 2024]
Figure 16: Chain-of-Thought, Self-Consistency, and Buffer of Thoughts [Yang et al.,
2024]
27
-----
Figure 17: Self-evaluation in multi-step reasoning in Beam-Search [Xie et al., 2024]
Figure 18: Reinforcement Learning [Sutton and Barto, 2018]
reasoning problems, this space can be very large. Beam-search solves this challenge
by searching only a promising part of this space. It uses self-evaluation to control
exploration and to evaluate (decode) reasoning steps. Figure 17 shows how Beamsearch self-evaluation is used in multi-step reasoning. Beam search uses Programaided-language models for math word problems [Gao et al., 2023]. Using a Codex
backbone [Chen et al., 2021], it surpasses the few-shot baselines by 6.34%, 9.56%, and
5.46% on the GSM8K, AQuA, and StrategyQA benchmarks, respectively.
**Reinforcement learning** Reinforcement learning (RL) methods are another step in
the sophistication of optimization algorithms. RL learns by interactive sampling, improving its policy based on rewards from the environment [Sutton and Barto, 2018].
To use reinforcement learning, the reasoning problem must be formulated as a Markov
Decision Process: the agent-algorithm creates a prompt (an action), to sample a step
(t) and get an answer (state, reward) from the environment-model (see Figure 18).
The answer can then be used to improve the prompt (next action), just like reinforcement learning uses rewards to improve its policy of best actions for each state. The
approaches that use reinforcement learning do so in the form of an external algorithm.
No prompt has been created that performs RL by itself.
Progressive-hint-prompting (PHP) uses reinforcement learning to interactively improve prompts [Zheng et al., 2023]. Figure 19 illustrates the approach. PHP is an
28
-----
Figure 19: Progressive Hint Prompting [Zheng et al., 2023]
external algorithm that calls the LLM with dynamic prompts, using previously generated answers as hints to progressively prompt the LLM toward the correct answers. It
works as follows: (1) given a question (prompt), the LLM provides a base answer, and
(2) by combining the question and answer, the LLM is queried and we obtain a subsequent answer. We (3) repeat operation (2) until the answer becomes stable, like a regular policy-optimizing reinforcement learning algorithm. The authors have combined
PHP with Chain-of-thought and with Self-consistency. Using GPT-4, state-of-the-art
performance was achieved in grade school math questions (95%), simple math word
problems (91%) and algebraic question answering (79%).
Another approach that is motivated by improving answers from feedback, is Selfrefine [Madaan et al., 2023]. In this method, initial outputs from LLMs are improved
through iterative feedback and refinement. Like PHP, the LLM generates an initial output and provides feedback for its answer, using it to refine itself, iteratively. Figures 20
and 21 illustrate the approach.
Self-refine prompts the LLM in three ways: (1) for initial generation, (2) for feedback, and (3) for refinement. Note that Self-refine follows a greedy reasoning chain,
learning from feedback. Self-refine has been used with GPT-3.5 and GPT-4 as base
LLMs, and has been benchmarked on dialogue response generation [Askari et al.,
2024], code optimization, code readability improvement, math reasoning, sentiment reversal, acronym generation, and constrained generation, showing substantial improvements over the base models.
Another approach that combines reinforcement learning and LLMs is ReAct [Yao
et al., 2022]. Most works so far have focused on reasoning by the LLM, not on actions by an agent. A key element of reinforcement learning is that it learns a policy for
29
-----
Figure 20: Self-Refine [Madaan et al., 2023]
Figure 21: Self-Refine [Madaan et al., 2023]
an environment. The goal of ReAct is to combine progress in reasoning with action
plan generation. (Or, to put it differently, most approaches use RL to improve LLMreasoning, ReAct uses LLMs to improve RL agent policies.) ReAct uses Chain-ofthought prompt-learning as part of an RL framework that also uses external knowledge
sources (Wikipedia) and finetuning, for error reduction, grounding, and for reducing
hallucination. The framework allows hand-written prompts. Figure 22 shows four
different prompting strategies. On two interactive decision making benchmarks (ALFWorld and WebShop), ReAct outperforms imitation and reinforcement learning methods by an absolute success rate of 34% and 10% respectively, while being prompted
with one or two in-context examples.
The ReAct work has been developed further. Reflexion [Shinn et al., 2024] is built
on top of ReAct. The goal is to create AI agents that learn by reflecting on failures and
enhancing their results, much like humans do. Reflexion uses three language models:
actor, evaluator, and reflector. It works as follows: (1) an actor generates text and
actions, (2) an evaluator model scores the outputs produced by the actor, and (3) a selfreflection model generates verbal reinforcement cues to assist the actor to self-improve
(see Figure 23). For the actor, Chain-of-thought [Wei et al., 2022b] and ReAct [Yao
et al., 2022] can be used. Reflexion is evaluated on decision-making, reasoning, and
coding tasks. Improvements of 10-20 percentage points are reported. Figure 24 shows
three different prompting applications.
To conclude this overview of reinforcement learning approaches, we discuss an
application in the games domain. Voyager [Wang et al., 2023] is an agent for the
30
-----
Figure 22: Comparison of four prompting strategies: Standard, Chain of Thought (reason), Action only, and ReAct [Yao et al., 2022]
Figure 23: Architecture of Reflexion [Shinn et al., 2024]
Figure 24: Comparison of three application areas [Shinn et al., 2024]
31
-----
Figure 25: Performance of Voyager in Minecraft [Wang et al., 2023]
game of Minecraft that uses an iterative prompting mechanism that generates code for
embodied control. The mechanism includes Self-verification [Shinn et al., 2024]. The
agent has a skill library and an automatic curriculum to maximize exploration. Voyager
interacts with GPT-4 through prompts. The goal of Voyager’s prompts is to discover
as many diverse items in Minecraft as possible, a form of novelty search [Eysenbach
et al., 2018]. Voyager performs well, it shows in-context lifelong learning capability
and reaches high scores by acquiring many tools (see Figure 25).
### 6 Discussion
We have reviewed approaches for prompt-based reasoning by LLMs, highlighting techniques that have achieved a breakthrough in reasoning performance. It is time for reflection on limitations in the approaches, suggesting promising areas of future work.
First we discuss issues concerning hallucination, faithful reasoning, and scaling. Then
we discuss what LLMs can and cannot do. Then, we highlight connections with sequential decision processes and metacognition, and end with a research agenda.
**6.1** **Hallucination, Faithfulness and Scaling**
Most works on reasoning in LLMs are experimental in nature. The success of incontext learning and Chain-of-thought reasoning is attracting the attention of work
providing deeper insight into the reasoning processes in language models.
Saparov and He [2022] introduce a synthetic question/answer dataset designed to
evaluate the reasoning abilities of LLMs. The work showed that LLMs are capable of
32
-----
reasoning to a certain degree, but that Chain-of-thought struggles with proof trees with
a wide branching factor. In another study, Wang et al. [2022a] also aim to increase
our understanding of how Chain-of-thought works. The authors find that it continues
to work even with invalid steps in the reasoning chain. They also find that the order
of the reasoning steps is important for good results. Prompts should be relevant to the
question, and coherent (steps should be in the correct order). Jin et al. [2024] study the
impact of reasoning step length on LLMs, finding a strong positive correlation between
the length of the prompt and reasoning abilities.
These works highlight ways in which LLM-reasoning can see things that are not
there. Next, we discuss works on failure modes of the Chain-of-thought approach,
studying whether the reasoning of the LLM is faithful, or that it gives the right answer
for the wrong reason.
**6.1.1** **Faithfulness**
Chain-of-thought and other approaches prompt a language model to take certain steps
to solve the problem that the prompt specifies. One can ask the question, whether those
steps are indeed the steps that the model has followed (faithful reasoning) or whether it
took another road to arrive at the correct answer (unfaithful reasoning). A few studies
measure the faithfulness of reasoning by LLMs. Lanham et al. [2023] notes that just
like organic reasoners, a model’s reasoning may be post-hoc, it may be constructed
after a certain conclusion has been found. By deliberately adding mistakes to the chain
of thought, the authors measure the faithfulness of the model. They find a wide variation of post-hoc reasoning, with a tendency of larger models to be less faithful. Like
regular LLMs, when not properly grounded, (Chain-of-thought) reasoning suffers from
hallucination.
Another study adds deliberate bias to the prompt. For example, in a multiplechoice setting, they always make answer (a) the correct answer [Turpin et al., 2024].
They find that a bias towards wrong answers can cause significant drops in accuracy,
and that models frequently generate Chain-of-though explanations rationalizing wrong
answers. The authors further note that, insofar as language models are trained on
human-written explanations, that explanations may be incomplete or wrong. Human
explanations may omit crucial steps of the causal chain, may provide an unfaithful account of the human reasoning process, or may be aimed at convincing others, instead
of providing the true causes of a decision.
To address issues of faithfulness, Lyu et al. [2023] propose Faithful-chain-of-thought.
This approach involves two stages. First, the natural language query is translated into
a formal symbolic language. Second, the problem-solving stage processes the formal language, and can explain the reasoning steps it has thus taken. For the symbolic
language, Python, Datalog, or PDDL is suggested. Faithfulness studies tell us more
about how models reason. Further surveys on this topic are Mondorf and Plank [2024],
Chuang et al. [2024], Luo et al. [2023], Paul et al. [2024],
33
-----
**6.1.2** **Scaling**
The emergent abilities of LLMs have prompted research into the nature of scaling and
reasoning with LLMs, and, specifically, how reasoning capabilities can be transferred
to smaller language models. Scaling laws of LLMs are an active area of study, see for
example Kaplan et al. [2020], Henighan et al. [2020], Hoffmann et al. [2022]. Given
the computational cost of LLMs, there is much interest in transferring knowledge to
small language models. Comprehensive surveys on knowledge distillation are Xu et al.
[2024], Gu et al. [2023]. For reasoning specifically, Magister et al. [2022] have studied
reasoning in small language models, using a student model that learns from a teacher
model, by finetuning. Another study related to Self-taught-reasoner [Li et al., 2022a]
focuses on explanation in small language models, achieving similar results.
Other works focus on prompt distillation for retrieval Dai et al. [2022], recommendation [Li et al., 2023], distillation to embodied agents of Chain-of-thought reasoning
[Choi et al.], and distillation of LLM graph reasoning [Zhang et al., 2024]. Distillation of reasoning to smaller models can work surprisingly well in situations with more
explicit instructions. Distillation is also proposed for bringing results of System 2 reasoning to System 1 Yu et al. [2024], which brings us to the topic of metacognition (see
Section 6.2.3).
**6.2** **Limitations: What LLMs Can and Cannot do**
The capabilities of LLMs are impressive. LLMs can be seen as large text-based surrogate models of the world (or the world how we describe it on the internet), and thus
allow us to reason in a way that we can understand about a large variety of contexts and
problems. Reasoning tasks, such as math word problems, were one of the capabilities
that LLMs could not achieve, until recently. Let us look more closely at what language
models can and cannot do.
**6.2.1** **What Can LLMs Do?**
With the right prompt LLMs are able to solve many of the problems in reasoning grade
school math word benchmarks. Prompt-based learning is able to perform reasoning
tasks such as math word problems, robotic movement, and Python code generation, at
inference time, without expensive parameter training.
We note that a simple taxonomy of generate-evaluate-control is able to describe
the structure of the current LLM reasoning literature well. Furthermore, the accuracy
of the reasoning chains can be improved with ensemble methods, or self-verification.
Hallucination can be reduced by grounding the model with external models, such as for
robotic affordances, and information retrieval from search engines and Wikipedia. Going a step further, using external control algorithms (such as search or RL) as scaffolding, dynamic prompts can use the LLMs to perform complex and interactive reasoning
patterns.
Note that the reasoning control is now two layers away from the core LLM: an external control algorithm, on top of in-context-learning, dynamically generating prompts
for the LLM. This is reasoning with prompts with LLMs, not by.
34
-----
At this point, it is interesting to note the confluence of the two schools of classical artificial intelligence (AI), symbolic and connectionist.[5] Search and reinforcement
learning are rooted in the symbolic AI tradition, while LLMs are rooted in the connectionist tradition. The literature in this survey combines the two traditions. High
performance reasoning is created with a (symbolic) searcher/learner on top of a (connectionist) LLM. In other fields similar combinations can be seen (for example, AlphaFold Bryant et al. [2022], Jumper et al. [2021] and retrosynthesis of molecules
Segler et al. [2018]). The LLM helps ground symbolic reasoning methods in language;
symbolic methods help create prompts that let the LLM perform reasoning. How the
two traditions will continue to improve eachother, we will see in further research.
We note that benchmarks such as GSM8K have been central for the progress in the
field, and that while reasoning started with math word problems, the field has extended
to robotics, autonomous agents, games, and most emphatically computer code. Formal
languages play an important role in the intermediate multi-step reasoning chains.
A side effect from the work on reasoning is the emergence of a new few-shot learning approach for sequential decision-making processes (SDP)[Littman, 1996]. Traditionally these processes are solved with reinforcement learning (such as DQN Mnih
et al. [2015], PPO [Schulman et al., 2017] and SAC Haarnoja et al. [2018]), achieving good results, but suffering from high sample complexity for larger problems Plaat
et al. [2023]. The emergence of few-shot in-context learning for solving SDPs opens a
research avenue to find out what SDPs few-shot prompt-learning will be able to solve.
**6.2.2** **What Can LLMs Not Do?**
Now that grade school math word problems are largely solvable, harder reasoning
benchmarks in other domains are appearing [Ahn et al., 2024]. Another line of research argues that LLMs cannot reason, providing examples where LLMs fail, and discussing potential reasons. Berglund et al. [2023] show that LLMs can fail to generalize
in surprising ways. They provide the example that if a model is trained to report that
”Valentina Tereshkova was the first woman to travel to space”, it will not automatically
be able to answer the question, ”Who was the first woman to travel to space?” pointing
to a lack in semantic understanding of LLMs. Other work suggests that results are less
generalizable and transferable than often assumed, showing how base-10 arithmetic
skills do not transfer to base-9 arithmetic problems Wu et al. [2024]. The question
which problems LLMs can and cannot solve will continue to motivate researchers.
Other works study the dangers of the size of LLMs. Bender et al. [2021] mention
the environmental risks associated with the large computational training demands, as
well as the difficulty of understanding the training data, for example in the context of
bias. Furthermore, there are ethical, legal, and copyright concerns regarding the data
that LLMs are trained on. Finally, to prevent putting too much trust in the outcome
5Reasoning and planning have been studied since the start of artificial intelligence, starting with logic
and reasoning [Newell and Simon, 1961], search algorithms in puzzles and board games [Korf, 1999, Plaat,
2020], robot planning [Fikes and Nilsson, 1971], classical machine learning such as decision trees and support vector machines [Flach, 2012, Breiman, 2001, Cortes and Vapnik, 1995], through knowledge representation and the semantic web [Van Harmelen et al., 2008]. Ever since the success of the connectionist
approach LeCun et al. [2015], Goodfellow et al. [2016] (deep learning, including LLMs) researchers have
tried to join the two approaches.
35
-----
of LLMs, we should understand their failure modes better, such as the well-publicized
problems of hallucination (inventing facts that look right but are not).
Most of the reasoning capabilities exhibited by LLMs are due to the great representational powers of the transformer architecture, and how in-context learning is able to
harness them. Prompt engineering and prompt control play a crucial role in the kind of
reasoning that we have seen in the papers. Models can be instructed to write their own
reasoning prompts; however, such Auto-GPT or Auto-CoT prompts need evaluation,
verification, and grounding in the real world, to prevent degeneration into a hallucinatory world of their own. Models can also be instructed to interact with the world,
and become the tool of external scaffolding that evaluates, controls and improves the
prompts. Some of what we experience as reasoning by the LLM, is controlled by the
prompt or the scaffolding algorithm. It is an open question if prompt learning is able
get the LLM to create a prompt to exhibit non-trivial reasoning by itself.
From the symbolic planning field there is also a critical view on the reasoning and
planning abilities of LLMs [Valmeekam et al., 2023] giving examples of planning failures. They argue that LLMs can be used instead to improve heuristic elements of traditional planners, such as PDDL [Kambhampati et al., 2024], to strengthen traditional
symbolic planning approaches.
Some of the names of the approaches surveyed in this paper are suggestive of selfawareness and self-reflective capabilities. True self-reflection, or metacognition, is still
largely outside the capabilities of current LLMs. LLMs can be prompted to reason,
to take small steps, to self-evaluate, and their search process can be controlled by an
external algorithm. The self-reflective type of “intelligence” is written into the prompt
by the prompt engineer or the interactive algorithm. We are unaware of any LLM
that has been made to reflect on, or even control, its reasoning processes, controlling
how many reasoning steps it should take, or limiting its reasoning once the answer had
become good enough. True self-reflection remains future work, although some steps
have been taken, as we will discuss next.
**6.2.3** **Reasoning towards Metacognition**
Human thought exhibits the ability to reason about self, we are able to think about our
own thinking processes. Metacognition studies these topics [Veenman et al., 2006].
Prompted by the success of Chain-of-thought and the works that we have surveyed,
metacognition has also been studied in the context of LLMs [Toy et al., 2024].
Many reasoning approaches highlight self-reflective aspects in their names and in
how they work. The prompts that prompt the models to reason are being improved with
the outcome of the reasoning process, and in Buffer-of-thoughts thought-templates are
used that are derived from other reasoning processes. Wang and Zhao [2023] study
Metacognitive-prompting. Inspired by Chain-of-thought and Self-consistency, they
create manually designed prompts to increase the understanding of language models.
Figure 26 illustrates the relation between metacognitive human thought processes and
metacognitive LLM prompting. Another work, again inspired by Chain-of-thought
and Self-consistency, connects psychology and LLMs. Didolkar et al. [2024] study
metacognitive capabilities of LLMs in mathematical problem solving, both on GSM8K
and on the harder MATH problems [Hendrycks et al., 2021]. First, the model is
36
-----
Figure 26: Metacognitive Prompting and the link with human metacognitive processes
[Wang and Zhao, 2023]
prompted to find a skill name for each problem instance in the dataset. For 7000
instances of GSM8K, 500 skill names were found by the model. Next, these 500
names are clustered down to 22 skills. They find that by using the names of these 22
skills in Chain-of-thought-like prompts, more problems are solved than with standard
Chain-of-Thought/Self-consistency/PAL prompts. Examples of the 22 skill names are
_multiplication-and-addition, basic-arithmetic, subtraction, and algebra. Interestingly,_
the authors find that the skill exemplar repository that is trained on a strong model
(GPT-4), also down-translates to a weak model (GPT-3). The performance of the weak
model benefits from the skill-name-enhanced prompts.
The connection between reasoning in LLMs and full-blown metacognitive reasoning is in its early stages. Exciting future research may appear.
**6.3** **Research Agenda**
At the end of this discussion, we present promising topics for future work. Reasoning
with LLMs is an active field of research. It brings together elements of symbolic reasoning, connectionism, natural language, autonomous agents, and affective reasoning
[Broekens et al., 2023] with the promise of artificial general intelligence.
For the future, the surveyed works point in the following directions. First we discuss topics for the field of LLM-reasoning itself, then we discuss more general machine
learning topics that are important for progress in LLM-reasoning, and finally we discuss more longer term, fundamental topics.
Specific research topics for reasoning with LLMs are:
- Control and prompt-learning—Search control beyond greedy search is implemented as an external algorithm. Is it possible to incorporate all stages of the
37
-----
reasoning pipeline into an interactive prompt? Can we make a prompt that performs dynamic search-like step control without external scaffolding?
- Code—Progress in reasoning using formal languages and computer code has
been quite promising. GitHub Copilot is a success. Further integration of LLMreasoning with software engineering tools is a promising area of research that
can have a large practical impact on how software is written.
- Grounding—Reasoning in LLMs has been successfully applied in autonomous
agents, robotics, and games. A challenge is the grounding of the reasoning process in the environment. How can we help LLMs to actively find new information when the reasoning outcome is uncertain? Is retrieval augmented generation
the future? Is the future of the reasoning-LLM a search engine [Verberne, 2024]?
Generic topics in machine learning that also influence prompt-based reasoning research
are:
- Benchmarks—Progress in LLMs is governed by the availability of the right
benchmarks. The current favorite is GSM8K, for grade school math. As the field
progresses, other benchmarks will become prevalent: benchmarks with more
difficult tasks, and benchmarks for other applications in autonomous agents and
robotics.
- Faithfulness—Our theoretical understanding of prompt-based reasoning with LLMs
is incomplete. The research on faithfulness highlights one example of our lack of
understanding. In general, more insight into the working of multi-step in-context
learning in LLMs is dearly needed.
- Small language models—Efficiency is an important element for wide adoption of
language models. Important topics are distillation of reasoning to small language
models and an understanding of scaling laws.
- Few-shot Reinforcement Learning—Small reasoning problems can be solved
with few-shot in-context learning. Can we solve larger sequential decision processes, reducing the sample complexity in reinforcement learning?
For longer term future work, the following more fundamental questions are important:
- Symbolic and Connectionist Computation—How can we further improve LLMreasoning: how can LLMs benefit from symbolic reasoning prompts and how
can LLMs help ground symbolic reasoning in language?
- Metacognition—Much of the research into reasoning guides the model how it
should solve a problem. Is it helpful to introduce named concepts for different
kinds of reasoning? Can the model find these concepts by itself? Making the
LLM “think” step by step is a first step towards influencing the model’s own
“thought” processes. The first works on LLM metacognition have appeared, and
artificial general intelligence will pursue this further.
38
-----
### 7 Conclusion
Prompt-based in-context learning is an efficient machine learning method, requiring no
parameter updates to the LLM. While achieving good performance on language tasks
(System 1), performance on reasoning tasks (System 2) was lacking. Reasoning tasks,
such as math word problems, are typically solved in step-by-step fashion. Recently
prompts have been developed that guide an LLM to “think step by step” (Chain-ofthought), and to evaluate and verify the step results. The performance of reasoning
with LLMs has improved greatly. Together, the surveyed methods allow the LLM
to follow high-quality multi-step reasoning chains. Python code or other formal languages have been used successfully to reduce the error in reasoning steps. Also, in the
field of autonomous agents and robotic action, good performance has been achieved by
grounding reasoning answers in the environment and the physical constraints of robotic
movement.
For complex reasoning tasks a large number of reasoning steps may be generated.
To control the size of the reasoning space interactively, external scaffolding algorithms
can be used. Often, variations on search algorithms or reinforcement learning are used.
The symbolic and connectionist AI traditions come together in reasoning prompts and
search algorithms that help LLM neural networks solve natural language math word
and related problems.
Among the most popular reasoning benchmarks in this survey is GSM8K, which
contains 8500 grade school math word problems. With LLMs such as GPT-3, reasoning
approaches show an improvement of 20-50% points over standard prompting methods.
For further progress in the field, the development of other challenging benchmarks is
important.
The field of reasoning with LLMs is quite new, and theoretical understanding is
lacking in important areas, such as faithful reasoning (models may sometimes find the
right answer for the wrong reason). Although prompt-based learning allows few-shot
learning, the computational needs of LLMs pretraining and finetuning are still high,
hence the interest in small language models. Reasoning skills that work in large models
can often be transferred to small models.
Human thought is capable of metacognition, we can think about our thinking process. Many of the names of the approaches in this survey suggest a link to metacognition (Reflexion, Self-refine, Self-improvement, Inner-monologue). The first preliminary experiments of language models that reason about their reasoning skills have
appeared.
LLM-reasoning is an active field of research, with connections to artificial general
intelligence. The field has shown great progress. Based on current limitations and open
questions we provide a research agenda highlighting opportunities for further progress
in harder reasoning problems, metacognition, and small language models, amongst
others.
39
-----
### References
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal
Anadkat, et al. Gpt-4 technical report. arXiv preprint arXiv:2303.08774, 2023.
Janice Ahn, Rishu Verma, Renze Lou, Di Liu, Rui Zhang, and Wenpeng Yin. Large language models for mathematical reasoning: Progresses and challenges. arXiv preprint
_arXiv:2402.00157, 2024._
Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron
David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al.
Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint
_arXiv:2204.01691, 2022._
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. Mathqa: Towards interpretable math word problem solving with
operation-based formalisms. arXiv preprint arXiv:1905.13319, 2019.
Arian Askari, Roxana Petcu, Chuan Meng, Mohammad Aliannejadi, Amin Abolghasemi, Evangelos Kanoulas, and Suzan Verberne. Self-seeding and multi-intent
self-instructing llms for generating intent-aware information-seeking dialogs. arXiv
_preprint arXiv:2402.11633, 2024._
Richard Bellman. Dynamic programming. science, 153(3731):34–37, 1966.
Emily M Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret
Shmitchell. On the dangers of stochastic parrots: Can language models be too big?
In Proceedings of the 2021 ACM conference on fairness, accountability, and trans_parency, pages 610–623, 2021._
Yoshua Bengio, J´erˆome Louradour, Ronan Collobert, and Jason Weston. Curriculum
learning. In Proceedings of the 26th annual international conference on machine
_learning, pages 41–48, 2009._
Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland,
Tomasz Korbak, and Owain Evans. The reversal curse: Llms trained on” a is b” fail
to learn” b is a”. arXiv preprint arXiv:2309.12288, 2023.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski,
Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr
Nyczyk, et al. Graph of thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI Conference on Artificial Intelligence,
volume 38, pages 17682–17690, 2024.
Leo Breiman. Random forests. Machine learning, 45:5–32, 2001.
Joost Broekens, Bernhard Hilpert, Suzan Verberne, Kim Baraka, Patrick Gebhard, and
Aske Plaat. Fine-grained affective processing capabilities emerging from large language models. In 2023 11th Intl Conf on Affective Computing and Intelligent Inter_action (ACII), pages 1–8. IEEE, 2023._
40
-----
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla
Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al.
Language models are few-shot learners. Advances in neural information processing
_systems, 33:1877–1901, 2020._
Patrick Bryant, Gabriele Pozzati, and Arne Elofsson. Improved prediction of proteinprotein interactions using alphafold2. Nature communications, 13(1):1265, 2022.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira
Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. _arXiv preprint_
_arXiv:2107.03374, 2021._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning
tasks. arXiv preprint arXiv:2211.12588, 2022.
Xinyun Chen, Chen Liang, Adams Wei Yu, Denny Zhou, Dawn Song, and Quoc V
Le. Neural symbolic reader: Scalable integration of distributed and symbolic representations for reading comprehension. In International Conference on Learning
_Representations, 2019._
Xinyun Chen, Maxwell Lin, Nathan Sch¨arli, and Denny Zhou. Teaching large language
models to self-debug. arXiv preprint arXiv:2304.05128, 2023.
Ting-Rui Chiang and Yun-Nung Chen. Semantically-aligned equation generation for
solving and reasoning math word problems. arXiv preprint arXiv:1811.00720, 2018.
Wonje Choi, Woo Kyung Kim, Minjong Yoo, and Honguk Woo. Embodied cot distillation from llm to off-the-shelf agents. In Forty-first International Conference on
_Machine Learning._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav
Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. Journal of
_Machine Learning Research, 24(240):1–113, 2023._
Yu-Neng Chuang, Guanchu Wang, Chia-Yuan Chang, Ruixiang Tang, Fan Yang,
Mengnan Du, Xuanting Cai, and Xia Hu. Large language models as faithful explainers. arXiv preprint arXiv:2402.04678, 2024.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz
Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al.
Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168,
2021.
Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine learning, 20:
273–297, 1995.
41
-----
Zhuyun Dai, Vincent Y Zhao, Ji Ma, Yi Luan, Jianmo Ni, Jing Lu, Anton Bakalov,
Kelvin Guu, Keith B Hall, and Ming-Wei Chang. Promptagator: Few-shot dense
retrieval from 8 examples. arXiv preprint arXiv:2209.11755, 2022.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pretraining of deep bidirectional transformers for language understanding. _arXiv_
_preprint arXiv:1810.04805, 2018._
Aniket Didolkar, Anirudh Goyal, Nan Rosemary Ke, Siyuan Guo, Michal Valko, Timothy Lillicrap, Danilo Rezende, Yoshua Bengio, Michael Mozer, and Sanjeev Arora.
Metacognitive capabilities of llms: An exploration in mathematical problem solving.
_arXiv preprint arXiv:2405.12205, 2024._
Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun,
Jingjing Xu, and Zhifang Sui. A survey on in-context learning. _arXiv preprint_
_arXiv:2301.00234, 2022._
John Dunlosky and Janet Metcalfe. Metacognition. Sage Publications, 2008.
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. _arXiv preprint_
_arXiv:1802.06070, 2018._
Richard E Fikes and Nils J Nilsson. Strips: A new approach to the application of
theorem proving to problem solving. Artificial intelligence, 2(3-4):189–208, 1971.
Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for
fast adaptation of deep networks. In International conference on machine learning,
pages 1126–1135. PMLR, 2017.
Peter Flach. Machine learning: the art and science of algorithms that make sense of
_data. Cambridge university press, 2012._
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexitybased prompting for multi-step reasoning. In The Eleventh International Conference
_on Learning Representations, 2022._
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie
Callan, and Graham Neubig. Pal: Program-aided language models. In International
_Conference on Machine Learning, pages 10764–10799. PMLR, 2023._
Louie Giray. Prompt engineering with chatgpt: a guide for academic writers. Annals
_of biomedical engineering, 51(12):2629–2633, 2023._
Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning. MIT press,
2016.
Yuxian Gu, Li Dong, Furu Wei, and Minlie Huang. Minillm: Knowledge distillation
of large language models. In The Twelfth International Conference on Learning
_Representations, 2023._
42
-----
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic:
Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In
_International conference on machine learning, pages 1861–1870. PMLR, 2018._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with
the math dataset. arXiv preprint arXiv:2103.03874, 2021.
Tom Henighan, Jared Kaplan, Mor Katz, Mark Chen, Christopher Hesse, Jacob Jackson, Heewoo Jun, Tom B Brown, Prafulla Dhariwal, Scott Gray, et al. Scaling laws
for autoregressive generative modeling. arXiv preprint arXiv:2010.14701, 2020.
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor
Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl,
Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint
_arXiv:2203.15556, 2022._
Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. Metalearning in neural networks: A survey. IEEE transactions on pattern analysis and
_machine intelligence, 44(9):5149–5169, 2021._
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean
Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language
models. arXiv preprint arXiv:2106.09685, 2021.
Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun
Yu, and Jiawei Han. Large language models can self-improve. _arXiv preprint_
_arXiv:2210.11610, 2022a._
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, Haotian Wang,
Qianglong Chen, Weihua Peng, Xiaocheng Feng, et al. A survey on hallucination in
large language models: Principles, taxonomy, challenges, and open questions. arXiv
_preprint arXiv:2311.05232, 2023._
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy
Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint
_arXiv:2207.05608, 2022b._
Mike Huisman, Jan N Van Rijn, and Aske Plaat. A survey of deep meta-learning.
_Artificial Intelligence Review, 54(6):4483–4541, 2021._
Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. arXiv preprint arXiv:2303.05398, 2023.
Benoit Jacob, Skirmantas Kligys, Bo Chen, Menglong Zhu, Matthew Tang, Andrew
Howard, Hartwig Adam, and Dmitry Kalenichenko. Quantization and training of
neural networks for efficient integer-arithmetic-only inference. In Proceedings of
_the IEEE conference on computer vision and pattern recognition, pages 2704–2713,_
2018.
43
-----
Mingyu Jin, Qinkai Yu, Haiyan Zhao, Wenyue Hua, Yanda Meng, Yongfeng Zhang,
Mengnan Du, et al. The impact of reasoning step length on large language models.
_arXiv preprint arXiv:2401.04925, 2024._
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov,
Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Z´[ˇ] ıdek, Anna
Potapenko, et al. Highly accurate protein structure prediction with alphafold. na_ture, 596(7873):583–589, 2021._
Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement
learning: A survey. Journal of artificial intelligence research, 4:237–285, 1996.
Daniel Kahneman. Thinking, fast and slow. macmillan, 2011.
Subbarao Kambhampati, Karthik Valmeekam, Lin Guan, Kaya Stechly, Mudit Verma,
Siddhant Bhambri, Lucas Saldyt, and Anil Murthy. Llms can’t plan, but can help
planning in llm-modulo frameworks. arXiv preprint arXiv:2402.01817, 2024.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws
for neural language models. arXiv preprint arXiv:2001.08361, 2020.
Tom Kocmi, Rachel Bawden, Ondˇrej Bojar, Anton Dvorkovich, Christian Federmann,
Mark Fishel, Thamme Gowda, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. Findings of the 2022 conference on machine translation (wmt22). In
_Proceedings of the Seventh Conference on Machine Translation (WMT), pages 1–_
45, 2022.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural informa_tion processing systems, 35:22199–22213, 2022._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. Mawps: A math word problem repository. In 2016 conference of the north
_american chapter of the association for computational linguistics: human language_
_technologies, pages 1152–1157, 2016._
Richard E Korf. Artificial intelligence search algorithms, 1999.
Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion,
et al. Measuring faithfulness in chain-of-thought reasoning. _arXiv preprint_
_arXiv:2307.13702, 2023._
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. nature, 521(7553):
436–444, 2015.
Lei Li, Yongfeng Zhang, and Li Chen. Prompt distillation for efficient llm-based recommendation. In Proceedings of the 32nd ACM International Conference on Infor_mation and Knowledge Management, pages 1348–1357, 2023._
44
-----
Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong
Wang, Jing Qian, Baolin Peng, Yi Mao, et al. Explanations from large language
models make small reasoners better. arXiv preprint arXiv:2210.06726, 2022a.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu
Chen. On the advance of making language models better reasoners. arXiv preprint
_arXiv:2206.02336, 2022b._
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by
rationale generation: Learning to solve and explain algebraic word problems. arXiv
_preprint arXiv:1705.04146, 2017._
Michael Lederman Littman. Algorithms for sequential decision-making. Brown University, 1996.
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham
Neubig. Pre-train, prompt, and predict: A systematic survey of prompting methods
in natural language processing. ACM Computing Surveys, 55(9):1–35, 2023.
Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, and Shirui Pan. Reasoning on
graphs: Faithful and interpretable large language model reasoning. arXiv preprint
_arXiv:2310.01061, 2023._
Qing Lyu, Shreya Havaldar, Adam Stein, Li Zhang, Delip Rao, Eric Wong, Marianna
Apidianaki, and Chris Callison-Burch. Faithful chain-of-thought reasoning. arXiv
_preprint arXiv:2301.13379, 2023._
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah
Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Selfrefine: Iterative refinement with self-feedback. _Advances in Neural Information_
_Processing Systems, 36, 2023._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, and
Aliaksei Severyn. Teaching small language models to reason. _arXiv preprint_
_arXiv:2212.08410, 2022._
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing english math word problem solvers. _arXiv preprint_
_arXiv:2106.15772, 2021._
Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen,
David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh
Venkatesh, et al. Mixed precision training. arXiv preprint arXiv:1710.03740, 2017.
Shervin Minaee, Tomas Mikolov, Narjes Nikzad, Meysam Chenaghlu, Richard Socher,
Xavier Amatriain, and Jianfeng Gao. Large language models: A survey. arXiv
_preprint arXiv:2402.06196, 2024._
45
-----
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness,
Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg
Ostrovski, et al. Human-level control through deep reinforcement learning. nature,
518(7540):529–533, 2015.
Philipp Mondorf and Barbara Plank. Beyond accuracy: Evaluating the reasoning behavior of large language models–a survey. arXiv preprint arXiv:2404.01869, 2024.
Shashi Narayan, Shay B Cohen, and Mirella Lapata. Don’t give me the details, just the
summary! topic-aware convolutional neural networks for extreme summarization.
_arXiv preprint arXiv:1808.08745, 2018._
Allen Newell and Herbert A Simon. Computer simulation of human thinking: A theory
of problem solving expressed as a computer program permits simulation of thinking
processes. Science, 134(3495):2011–2017, 1961.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob
Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David
Luan, et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela
Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in neu_ral information processing systems, 35:27730–27744, 2022._
Denis Paperno, Germ´an Kruszewski, Angeliki Lazaridou, Quan Ngoc Pham, Raffaella
Bernardi, Sandro Pezzelle, Marco Baroni, Gemma Boleda, and Raquel Fern´andez.
The lambada dataset: Word prediction requiring a broad discourse context. arXiv
_preprint arXiv:1606.06031, 2016._
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method
for automatic evaluation of machine translation. In Proceedings of the 40th annual
_meeting of the Association for Computational Linguistics, pages 311–318, 2002._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve
simple math word problems? arXiv preprint arXiv:2103.07191, 2021.
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut,
Robert West, and Boi Faltings. Refiner: Reasoning feedback on intermediate representations. arXiv preprint arXiv:2304.01904, 2023.
Debjit Paul, Robert West, Antoine Bosselut, and Boi Faltings. Making reasoning matter: Measuring and improving faithfulness of chain-of-thought reasoning. _arXiv_
_preprint arXiv:2402.13950, 2024._
Aske Plaat. Learning to play: reinforcement learning and games. Springer Nature,
2020.
Aske Plaat. Deep reinforcement learning. Springer, Singapore, 2022.
46
-----
Aske Plaat, Walter Kosters, and Mike Preuss. High-accuracy model-based reinforcement learning, a survey. Artificial Intelligence Review, 56(9):9541–9573, 2023.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike
Lewis. Measuring and narrowing the compositionality gap in language models.
_arXiv preprint arXiv:2210.03350, 2022._
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever,
et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9,
2019.
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al.
Scaling language models: Methods, analysis & insights from training gopher. arXiv
_preprint arXiv:2112.11446, 2021._
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is
secretly a reward model. Advances in Neural Information Processing Systems, 36,
2024.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+
questions for machine comprehension of text. arXiv preprint arXiv:1606.05250,
2016.
Bernardino Romera-Paredes, Mohammadamin Barekatain, Alexander Novikov, Matej
Balog, M Pawan Kumar, Emilien Dupont, Francisco JR Ruiz, Jordan S Ellenberg,
Pengming Wang, Omar Fawzi, et al. Mathematical discoveries from program search
with large language models. Nature, 625(7995):468–475, 2024.
Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint
_arXiv:1608.01413, 2016._
Omer Sagi and Lior Rokach. Ensemble learning: A survey. Wiley interdisciplinary
_reviews: data mining and knowledge discovery, 8(4):e1249, 2018._
Pranab Sahoo, Ayush Kumar Singh, Sriparna Saha, Vinija Jain, Samrat Mondal, and
Aman Chadha. A systematic survey of prompt engineering in large language models:
Techniques and applications. arXiv preprint arXiv:2402.07927, 2024.
Abulhair Saparov and He He. Language models are greedy reasoners: A systematic
formal analysis of chain-of-thought. arXiv preprint arXiv:2210.01240, 2022.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.
Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Marwin HS Segler, Mike Preuss, and Mark P Waller. Planning chemical syntheses with
deep neural networks and symbolic ai. Nature, 555(7698):604–610, 2018.
Rico Sennrich, Barry Haddow, and Alexandra Birch. Improving neural machine translation models with monolingual data. arXiv preprint arXiv:1511.06709, 2015.
47
-----
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin Jiang, Ming Zhang, and Qun Liu.
Generate & rank: A multi-task framework for math word problems. arXiv preprint
_arXiv:2109.03034, 2021._
Zhengliang Shi, Shen Gao, Xiuyi Chen, Yue Feng, Lingyong Yan, Haibo Shi, Dawei
Yin, Zhumin Chen, Suzan Verberne, and Zhaochun Ren. Chain of tools: Large
language model is an automatic multi-tool learner. arXiv preprint arXiv:2405.16533,
2024a.
Zhengliang Shi, Shen Gao, Xiuyi Chen, Lingyong Yan, Haibo Shi, Dawei Yin, Zhumin
Chen, Pengjie Ren, Suzan Verberne, and Zhaochun Ren. Learning to use tools via
cooperative and interactive agents. arXiv preprint arXiv:2403.03031, 2024b.
Noah Shinn, Federico Cassano, Ashwin Gopinath, Karthik Narasimhan, and Shunyu
Yao. Reflexion: Language agents with verbal reinforcement learning. Advances in
_Neural Information Processing Systems, 36, 2024._
Mohit Shridhar, Lucas Manuelli, and Dieter Fox. Cliport: What and where pathways
for robotic manipulation. In Conference on robot learning, pages 894–906. PMLR,
2022.
Richard S Sutton. Learning to predict by the methods of temporal differences. Machine
_learning, 3:9–44, 1988._
Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. 2018.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv
_preprint arXiv:1811.00937, 2018._
Yiming Tan, Dehai Min, Yu Li, Wenbo Li, Nan Hu, Yongrui Chen, and Guilin Qi. Can
ChatGPT replace traditional KBQA models? An in-depth analysis of the question
answering performance of the GPT LLM family. In International Semantic Web
_Conference, pages 348–367. Springer, 2023._
Yi Tay, Mostafa Dehghani, Vinh Q Tran, Xavier Garcia, Jason Wei, Xuezhi Wang,
Hyung Won Chung, Siamak Shakeri, Dara Bahri, Tal Schuster, et al. Ul2: Unifying
language learning paradigms. arXiv preprint arXiv:2205.05131, 2022.
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha,
Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239, 2022.
Jason Toy, Josh MacAdam, and Phil Tabor. Metacognition is all you need? using
introspection in generative agents to improve goal-directed behavior. arXiv preprint
_arXiv:2401.10910, 2024._
Miles Turpin, Julian Michael, Ethan Perez, and Samuel Bowman. Language models don’t always say what they think: unfaithful explanations in chain-of-thought
prompting. Advances in Neural Information Processing Systems, 36, 2024.
48
-----
Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. On the planning abilities of large language models-a critical investigation.
_Advances in Neural Information Processing Systems, 36:75993–76005, 2023._
Frank Van Harmelen, Vladimir Lifschitz, and Bruce Porter. Handbook of knowledge
_representation. Elsevier, 2008._
Niki van Stein and Thomas B¨ack. Llamea: A large language model evolutionary algorithm for automatically generating metaheuristics. arXiv preprint arXiv:2405.20132,
2024.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N
Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in
_neural information processing systems, 30, 2017._
Marcel VJ Veenman, Bernadette HAM Van Hout-Wolters, and Peter Afflerbach.
Metacognition and learning: Conceptual and methodological considerations.
_Metacognition and learning, 1:3–14, 2006._
Suzan Verberne. Is the search engine of the future a chatbot? Inaugural lecture, Leiden
University, 2024.
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R
Bowman. Glue: A multi-task benchmark and analysis platform for natural language
understanding. arXiv preprint arXiv:1804.07461, 2018.
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael,
Felix Hill, Omer Levy, and Samuel Bowman. Superglue: A stickier benchmark for
general-purpose language understanding systems. Advances in neural information
_processing systems, 32, 2019._
Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and
Huan Sun. Towards understanding chain-of-thought prompting: An empirical study
of what matters. arXiv preprint arXiv:2212.10001, 2022a.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu,
Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with
large language models. arXiv preprint arXiv:2305.16291, 2023.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang,
Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of
thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022b.
Yuqing Wang and Yun Zhao. Metacognitive prompting improves understanding in
large language models. arXiv preprint arXiv:2308.05342, 2023.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud,
Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent
abilities of large language models. arXiv preprint arXiv:2206.07682, 2022a.
49
-----
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V
Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. _Advances in neural information processing systems, 35:24824–_
24837, 2022b.
Sean Welleck, Ximing Lu, Peter West, Faeze Brahman, Tianxiao Shen, Daniel
Khashabi, and Yejin Choi. Generating sequences by learning to self-correct. arXiv
_preprint arXiv:2211.00053, 2022._
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He, Shengping Liu, Bin Sun, Kang
Liu, and Jun Zhao. Large language models are better reasoners with self-verification.
_arXiv preprint arXiv:2212.09561, 2022._
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Aky¨urek, Boyuan Chen, Bailin Wang,
Najoung Kim, Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the
capabilities and limitations of language models through counterfactual tasks. In 2024
_Conference of the North American Chapter of the Association for Computational_
_Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1819–_
1862, 2024.
Yisheng Xiao, Lijun Wu, Junliang Guo, Juntao Li, Min Zhang, Tao Qin, and Tie-yan
Liu. A survey on non-autoregressive generation for neural machine translation and
beyond. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023a.
Ziyang Xiao, Dongxiang Zhang, Yangjun Wu, Lilin Xu, Yuan Jessica Wang, Xiongwei
Han, Xiaojin Fu, Tao Zhong, Jia Zeng, Mingli Song, et al. Chain-of-experts: When
llms meet complex operations research problems. In 12th International Conference
_on Learning Representations, 2023b._
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, James Xu Zhao, Min-Yen Kan, Junxian He,
and Michael Xie. Self-evaluation guided beam search for reasoning. Advances in
_Neural Information Processing Systems, 36, 2024._
Xiaohan Xu, Ming Li, Chongyang Tao, Tao Shen, Reynold Cheng, Jinyang Li, Can
Xu, Dacheng Tao, and Tianyi Zhou. A survey on knowledge distillation of large
language models. arXiv preprint arXiv:2402.13116, 2024.
Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang,
Joseph E Gonzalez, and Bin Cui. Buffer of thoughts: Thought-augmented reasoning
with large language models. arXiv preprint arXiv:2406.04271, 2024.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and
Yuan Cao. React: Synergizing reasoning and acting in language models. arXiv
_preprint arXiv:2210.03629, 2022._
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and
Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances in Neural Information Processing Systems, 36, 2024.
50
-----
Ping Yu, Jing Xu, Jason Weston, and Ilia Kulikov. Distilling system 2 into system 1.
_arXiv preprint arXiv:2407.06023, 2024._
Eric Zelikman, Jesse Mu, Noah D Goodman, and Yuhuai Tony Wu. Star: Self-taught
reasoner bootstrapping reasoning with reasoning. Advances in Neural Information
_Processing Systems (NeurIPS), 2022._
Yizhuo Zhang, Heng Wang, Shangbin Feng, Zhaoxuan Tan, Xiaochuang Han, Tianxing He, and Yulia Tsvetkov. Can llm graph reasoning generalize beyond pattern
memorization? arXiv preprint arXiv:2406.15992, 2024.
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought
prompting in large language models. arXiv preprint arXiv:2210.03493, 2022.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou,
Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. A survey of large
language models. arXiv preprint arXiv:2303.18223, 2023.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressivehint prompting improves reasoning in large language models. _arXiv preprint_
_arXiv:2304.09797, 2023._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-ajudge with mt-bench and chatbot arena. Advances in Neural Information Processing
_Systems, 36, 2024._
Denny Zhou, Nathanael Sch¨arli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang,
Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most
prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625, 2022._
51
-----
| [
"Aske, Plaat",
"Annie, Wong",
"Suzan, Verberne",
"Joost, Broekens",
"Niki, van Stein",
"Thomas, Back"
] | 2024-07-16T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2407.11511 | https://arxiv.org/abs/2407.11511 | https://www.semanticscholar.org/paper/38c61bbb6b8f69395ee4592171f732f9bca3dd5c |
Reinforcement Learning for Guiding the E Theorem Prover | N/A | This work uses reinforcement learn-ing to learn a metaheuristic that decides which heuristic to use at each step of a proof search in the E ATP system, to reduce the number of inferencesteps used in successful proof searches. | null | [
"Jack, McKeown",
"Geoff, Sutcliffe"
] | 2023-05-08T00:00:00 | null | false | 1 | 0 | null | https://journals.flvc.org/FLAIRS/article/view/133334 | null | https://www.semanticscholar.org/paper/55fea35f36f5c5898af9241cb5f7969b93e00f76 |
Reliable Reasoning Beyond Natural Language | Despite their linguistic competence, Large Language models (LLMs) often exhibit limitations in their ability to reason reliably and flexibly. To address this, we propose a neurosymbolic approach that prompts LLMs to extract and encode all relevant information from a problem statement as logical code statements, and then use a logic programming language (Prolog) to conduct the iterative computations of explicit deductive reasoning. Our approach significantly enhances the performance of LLMs on the standard mathematical reasoning benchmark, GSM8k, and the Navigate dataset from the BIG-bench dataset. Additionally, we introduce a novel dataset, the Non-Linear Reasoning (NLR) dataset, consisting of 55 unique word problems that target the shortcomings of the next token prediction paradigm of LLMs and require complex non-linear reasoning but only basic arithmetic skills to solve. Our findings demonstrate that the integration of Prolog enables LLMs to achieve high performance on the NLR dataset, which even the most advanced language models (including GPT4) fail to solve using text only. | This work proposes a neurosymbolic approach that prompts LLMs to extract and encode all relevant information from a problem statement as logical code statements, and then uses a logic programming language (Prolog) to conduct the iterative computations of explicit deductive reasoning. | ## Reliable Reasoning Beyond Natural Language
**Nasim Borazjanizadeh** **Steven T. Piantadosi**
University of California, Berkeley Department of Psychology, UC Berkeley
**Abstract**
Despite their linguistic competence, Large Language models (LLMs) often exhibit
limitations in their ability to reason reliably and flexibly. To address this, we
propose a neurosymbolic approach that prompts LLMs to extract and encode all
relevant information from a problem statement as logical code statements, and then
use a logic programming language (Prolog) to conduct the iterative computations of
explicit deductive reasoning. Our approach significantly enhances the performance
of LLMs on the standard mathematical reasoning benchmark, GSM8k, and the
Navigate dataset from the BIG-bench dataset. Additionally, we introduce a novel
dataset, the Non-Linear Reasoning (NLR) dataset, consisting of 55 unique word
problems that target the shortcomings of the next token prediction paradigm of
LLMs and require complex non-linear reasoning but only basic arithmetic skills
to solve. Our findings demonstrate that the integration of Prolog enables LLMs
to achieve high performance on the NLR dataset, which even the most advanced
language models (including GPT4) fail to solve using text only.
**Introduction**
The recent emergence of large language models (LLMs) [4, 26, 25, 9, 8, 30, 35, 36] has revolutionized
the field of Natural Language Processing (NLP), with LLMs demonstrating human-level performance
across various professional and academic benchmarks [27] and exhibiting an excellent understanding
of linguistic rules and patterns [20].
However, despite their linguistic competence, LLMs often demonstrate significant limitations in
their capacity to reason reliably and flexibly [20, 13, 40]. These limitations likely stem from the
autoregressive architecture of transformers, which enforces the solution to the problems sequentially:
the models’ reliance on a greedy process for predicting the next word constrains their backtracking
and error recovery capability [13]. Models are expected to generate an answer in a single pass of their
feedforward architecture, which cannot implement conditional loops [5]. Moreover, the statistical
nature of LLMs’ training and representation means they often fail in generalizing appropriately to
problems outside their training distribution, especially in settings requiring reasoning and discrete
processes [41]. Furthermore, even the most advanced LLMs, including GPT4, have an incredibly short
working memory[5], while reliable reasoning requires accurate and robust retrieval and integration of
all relevant information.
Additionally, the linear and sequential nature of natural language contrasts with the complex and
non-linear computations often involved in deductive reasoning. Even humans struggle with reasoning
tasks when the brainstorming medium is confined to text. This is well illustrated by the history of
logic. Aristotle’s writing on syllogistic reasoning, for example, lacked the tools of symbolic logic
later developed for this kind of argumentation. The result is clunky and difficult to follow, even when
correct:
If A has been proved to all or to some B, then B must belong to some A: and if
A has been proved to belong to no B, then B belongs to no A. This is a different
Preprint. Under review.
-----
**Problem: There are 4 brothers, John, Blake, Sam, and Frank. In three years, Sam will be twice as old as John, and**
Blake’s age will be equal to the age difference between Sam and Frank. In eight years, John's age would be twice
Blake's age, and John would be 2 years younger than Frank’s current age. Also, the sum of Sam’s and Blake’s ages is 5
years less than the sum of John’s and Frank’s ages. What is the sum of the 4 brother’s ages right now? Input
Prolog prompt with chain 63
of thought reasoning in
text and code Output
problem(Sum_4_brothers_ages):{% In three years, Sam will be twice as old as John
Sam_current_age + 3 = 2 * (John_current_age + 3),
% In three years, Blake’s age will be equal to the Code executed
age difference between Sam and Frank Prolog
LLM Blake_current_age + 3 = abs(Sam_current_age + 3 Interpreter
- (Frank_current_age + 3)),
**…** LLM Generation 1
…
…
problem(Sum_of_brothers_ages):- …
{S + 3 = 2 * (J + 3),% In three years, Sam will be twice as old as John Code was not
% Blake’s age will be equal to the age difference executed
between Sam and Frank
B = S - F,
% In eight years, John's age would be twice Blake's age
J + 8 = 2 * (B + 8),
% John would be 2 years younger than Frank’s current
age
J = F - 2, … LLM Generation 2 Multiple-Try
Inference Algorithm
Figure 1: Our approach: A natural language problem (a math word problem from NLR dataset) is
given to an LLM, which is prompted to perform CoT in text and logical code to encode the variable
relationship as logical code statements. The Prolog interpreter executes the code. If the Prolog
program fails, the LLM is re-prompted until valid code is generated or a limit of attempts is reached.
conclusion from the former. But if A does not belong to some B, it is not necessary
that B should not belong to some A: for it may possibly belong to all A.
Others like Venn [37] and Boole [3] developed systems which allowed such reasoning to take place
in a different medium—symbolic diagrams and algebraic equations respectively. These tools support
reasoning about much richer types of logical relationships and deductive logic, than could be easily
conveyed in natural language. Jevons [18] even developed a mechanical system for such logical
reasoning, much in the spirit of Babbage’s work (see Gardner [15]). More generally, the principles
and notation of mathematics allow us to concisely express concepts that would be incredibly difficult
to express in natural language alone. Formalizing of reasoning in a system other than natural language
has several descendants, from the General Problem Solver of Newell et al. [22], to logic programming
languages like Prolog [12], and formal tools for robust verification like Lean [21]. Natural language
is not enough for any of these domains.
**2** **Our Approach**
To enable LLMs to perform deductive reasoning robustly, we propose integrating a reliable, deductive
reasoning module into their inference pipeline. Specifically, in this study, we prompt the model to
encode the constraints and relationships among variables, as described in the problem statement, as a
set of Prolog code statements. The generated code is then evaluated by Prolog, which uses deductive
approach, to derive a deterministic answer to the problem (Figure 1). This not only has the advantage
of mirroring the likley human architecture of separate linguistic and reasoning systems [14, 20], but
as we show, significantly improves the performance of LLMs in mathematical reasoning.
Indeed, this approach draws on the strengths of both symbolic and neural systems. Though systems
like Prolog support reliable deduction, they have no mechanism to deal with the complexities and
intricacies of natural language descriptions of problems. Moreover, they are unable to perform
_implicit reasoning, which involves extracting information that is not explicitly stated in the text but is_
-----
rather implied through common sense assumptions and context. However, Prolog and related systems
excel at reasoning, with the ability to incorporate an arbitrary number of facts in their deductive
processes, only generating valid conclusions given their assumptions. Prolog expresses knowledge as
a set of relations, facts, and rules, and uses a reasoning engine to run queries over these relations,
applying rules through resolution until a solution is found or all possibilities are exhausted. The
ability to backtrack, conduct comprehensive searches, and accurately store and retrieve an arbitrary
number of rules and relations are the capabilities that are difficult to implement using the feedforward
architecture of LLMs, but essential for accurate deductive reasoning.
Moreover, in contrast to procedural or functional programming, declarative programming paradigm of
Prolog focuses on defining what to execute and the program logic rather than specifying the detailed
control flow. When LLMs are prompted to generate logical code to solve a problem, this declarative
nature reduces the load on the LLM to define the variables or constraints encoded in the problem in
the correct order or generate all intermediate steps of the computation correctly, allowing for a more
direct mapping of the information encoded in natural language statements to logical code.
Two specific design choices help this approach work well. First, we prompt the LLM to perform
**Chain of Thought (CoT) [39] reasoning in text and logical code. This in-context learning method**
involves the integration of natural language comments that walk through the implicit reasoning steps
required to arrive at the intermediate variables and code statements. While the code statements
encode the explicit constraints and declarative arithmetic statements that the Prolog interpreter needs
to compile. This technique allows the model to reason through the information implied by the context
of problem statements and common sense but not explicitly stated (see e.g. Table 2). Second, we use
the Multiple Try inference algorithm to obtain the models’ logical code generation for the problems.
Using this inference method, if the Prolog code, generated by the LLM, fails to execute successfully[1],
we rerun the model with a slightly increased temperature (with a preset maximum number of attempts)
and return the numerical answer returned by the model’s first executable code generation (in contrast
to, e.g., majority-vote schemes [38]). This approach helps to mitigate the brittleness of symbolic
programming code.
We also introduce a novel dataset, the Non-Linear reasoning dataset (NLR) dataset[2], which is
designed to evaluate the generalizability of LLMs’ mathematical reasoning capabilities. Motivated
by the corruption of test and training sets for many mathematical tasks [27], and the simple and
repetitive pattern of reasoning required to solve the problems of the current reasoning benchmarks
[11, 32, 28, 10, 19], we present a new dataset that (i) is certainly outside of current models’ training
sets, and (ii) each problem necessitates a unique and creative reasoning pattern to solve, while the
mathematical skills needed are limited to basic arithmetic and algebra. This benchmark consists of
unique constraint problems, math word problems, and problems that require following algorithmic
instructions for updating a game model (see e.g. Table 1). We demonstrate that the most advanced
LLMs, including GPT4, struggle to solve these problems when prompted to solve them step by step,
utilizing a chain of thought text prompt, despite their success on other mathematical tasks [27, 5].
**3** **Comparison to Other Approaches**
Several studies have explored the integration of LLMs with external tools and symbolic reasoning
modules [24, 23, 11, 31]. For instance, training LLMs to make API calls to tools like calculators,
interpreters, or external datasets has been shown to improve their performance across a variety of
reasoning tasks [11, 31, 29]. While these methods have successfully reduce the arithmetic errors
of LLMs, they do not sufficiently address the reasoning limitations inherent to the next-token
prediction paradigm of LLMs and the linear nature of text, which can restrict the ability to perform
comprehensive searches over the space of possibilities, explore multiple pathways to a solution, or
backtrack.
Our approach builds upon and extends the work of LINC [24] and Nye et al. [23]. LINC uses a
neurosymbolic process to convert natural language into first-order logic expressions with LLMs to
determine the truth value of conclusions via a symbolic theorem prover. This method has shown
significant performance gains on the FOLIO [16] and ProofWriter [34] datasets compared to CoT
1This primarily occurs due to variable name assignment errors, as data flows are described without mutability
in Prolog’s declarative syntax, unlike procedural programming.
[2Link to NLR dataset](https://anonymous.4open.science/api/repo/Reasoning-Beyond-NL-D604/file/NLR_dataset.json?v=6b2378b7)
-----
Table 1: Examples of each problem category in the NLR dataset
|NLR Problem Statement|Characteristics|
|---|---|
|Math Word Problem: When I was half my current age, my father was 30. When I was 1/3 my current age, my mother was 25. And when I was 1/6 of my current age, my sister was 7. If the sum of my age, my sister’s age, my father’s age, and my mother’s age is 116, then how old am I now?|4 entangled variables in the problem|
|Constraint Satisfaction: In a line to enter a cinema, 4 people are standing between Bob and Alex. Chad’s index in the line is 1 after Bob’s, he’s standing right behind Bob considering the order of people left to right. Frank is right behind Alex. Sam is right in front of Bob. There are 2 people between Sam and Frank. If Bob is in the 7th person in the line, counting left to right, what is the number of Alex?|2 constraints encoding multiple possibilities|
|Algorithmic Instructions: There’s a cinema with 12 seats organized in 3 rows and 4 columns. Due to covid there’s a policy that a seat can be filled only if none of the seats right next to it in the same column or the same row are not filled. If we place a person in the seat in the second column of the first row and then start to fill the seats left to right, row by row, starting row with 1, how many people can be seated in the cinema in total?|5 entangled variables in each state|
prompting. However, it has a limitation in capturing implicit information not explicitly stated in the
premises, as it primarily uses LLMs as a semantic parser, translating each natural language premise
directly into a logical statement [24]. Similarly, Nye et al. [23] improves the performance of LLMs in
story generation and instruction-following tasks by using a symbolic reasoning module to check the
logical consistency of generated text against a minimal world model. This method increases accuracy
and robustness of neural generation but is limited by the need for hand-crafting the world model and
defining specific constraints.
In our approach, the world model is constructed by the LLM itself, with no limitations on the number
of constraints that can be encoded in the problem. Moreover, rather than using LLMs as semantic
parsers or text-to-logical code translators, we prompt the LLM to perform chain of thought (CoT)
reasoning in both text and logical code, prompting the LLM to conduct implicit reasoning and use
additional tokens as working memory to derive intermediate variables. This enables a more flexible
and generalizable reasoning process, making our neurosymbolic approach applicable to a wider
variety of problems.
Our approach is also similar to the ’Program of Thought’ (PoT) method [6], which separates
computation from reasoning and language understanding. In PoT, LLMs generate text (as comments)
and programming language statements to solve problems, delegating computation to a program
interpreter. However, PoT’s code statements often directly translate the comments. In contrast, our
in-context prompts use comments to walk through implicit reasoning steps, making them an integral
part of the CoT. This allows the comments to encode different reasoning from the code statements,
extracting necessary but unstated information to generate the correct logical code. (see Table 2 for
examples of implicit reasoning performed by GPT4 in the comments).
**4** **Experiments**
We first present results of our approach on two existing datasets: the standard mathematical reasoning
dataset, GSM8k, and the Navigate task, extracted from the BIG-bench benchmark. In our experiments,
we compare our approach against the standard prompting method used for reasoning tasks: examples
solved using Chain of Thought (CoT) reasoning in text.
**GSM8k [11] is a widely used benchmark for mathematical reasoning tasks, comprising of elementary**
school math problems. We tested the performance of our approach using GPT4 and GPT3.5 Turbo
(hereafter GPT3.5), and text-davinci-003 models on the GSM8k dataset. To construct the Prolog
prompt, we selected eight problems from the first 25 problems in the shuffled test split of the dataset.
This selection was made to ensure that the prompt examples covered a variety of difficulty levels
(The GSM8k dataset does not provide a difficulty score for the problems). The variable names in the
Prolog prompts include a brief description and the unit of the variable in order to provide additional
-----
Table 2: Comparison of GPT4’s CoT reasoning in text vs in text and logical code for a constraint
|Problem: I am a 4 digit number. My rightmost digit is not divisible by 2. The sum of my digits is 20, and all my digits are in strictly decreasing order from left to right. One of my digits is 4 times one of my other digits, and the difference between my 2 middle digits is more than 3. What number am I? Answer: 9821|Col2|
|---|---|
|problem(Number):- % The number is a 4 digit number. We can write it in terms of its digits: Number #= 1000 * Digit4 + 100 * Digit3 + 10 * Digit2 + Digit1, % Each of the digits of the number are single digit numbers in range [0, 9] Digit1 #>= 0, Digit1 #< 10, Digit2 #>= 0, Digit2 #< 10, Digit3 #>= 0, Digit3 #< 10, % ** Besides the leftmost digit which can not be 0 because otherwise the number would be a 3 digit number. ** The leftmost number must be in range [1, 9] Digit4 #> 0, Digit4 #< 10, % The rightmost digit is not divisible by 2 Digit1 mod 2 #\= 0, % The sum of the digits is 20 Digit1 + Digit2 + Digit3 + Digit4 #= 20, % All the digits are in strictly decreasing order from left to right Digit4 #> Digit3, Digit3 #> Digit2, Digit2 #> Digit1, % One of the digits is 4 times one of the other digits. ** There are 12 possibilities for this condition, as there are 12 unique digit pairs in this number **: (4 * Digit1 #= Digit2; 4 * Digit1 #= Digit3; 4 * Digit1 #= Digit4;4 * Digit2 #= Digit1;4 * Digit2 #= Digit3; 4 * Digit2 #= Digit4; 4 * Digit3 #= Digit1; 4 * Digit3 #= Digit2;4 * Digit3 #= Digit4;4 * Digit4 #= Digit1; 4 * Digit4 #= Digit2; 4 * Digit4 #= Digit3), % The difference between the 2 middle digits is more than 3 abs(Digit3 - Digit2) #> 3.|Let’s denote the four digits of the number as a, b, c, and d (from left to right). From the problem, we know that: 1. d is not divisible by 2, which means d is an odd number. The possible values for d are 1, 3, 5, 7, 9. 2. a + b + c + d = 20, which means the sum of the four digits is 20. 3. The digits are in strictly decreasing order from left to right, which means a > b > c > d. 4. One of the digits is 4 times one of the other digits. 5. The difference between the two middle digits (b and c) is more than 3, which means |b - c| > 3. Let’s start solving the problem: Since the digits are in strictly decreasing order and the sum of the digits is 20, a must be at least 9. If a is 9, then b + c + d = 11. Since b > c > d and d is odd, the maximum value for d is 5 (because if d is 7 or 9, then b + c would be less than 4, which is not possible because b and c must be different and greater than d). So, if d is 5, then b + c = 6. The only possible values for b and c are 4 and 2, but the difference between b and c is not more than 3. So, d cannot be 5. If d is 3, then b + c = 8. The possible values for b and c are 5 and 3, but the difference between b and c is not more than 3. So, d cannot be 3. If d is 1, then b + c = 10. The possible values for b and c are 7 and 3, and the difference between b and c is more than 3. Also, one of the digits is 4 times one of the other digits (3 is 4 times 1). So, the number is 9731.|
satisfaction problem in the NLR dataset. Red text indicates incorrect reasoning steps. **, added
manually, encloses implicit reasoning steps in the comments. In the above example compiling the
Prolog code generated by the model results in the correct answer.
information about feasibility of an operation between two variables. We employed the CLP(FD)
library [1] of SWI-Prolog to write the Prolog code for the constraint satisfaction prompts. For the
math word problems and algorithmic instructions prompts, we used the CLP(R) library [2]. These
libraries enable performing declarative arithmetic and solving systems of linear equations in Prolog.
Figure 2.a demonstrates that integrating Prolog with text-davinci-003 and GPT3.5 models significantly
improved their performance on GSM8k, highlighting the benefit of incorporating a reliable reasoning
module in the inference pipeline of LLMs. The declarative nature of Prolog facilitates outlining the
program logic, thus reducing the load on the LLM by eliminating the need to specify the control flow,
order of the operations, or accurately generate all intermediate computational steps. The performance
of GPT4 also showed considerable improvement with the integration of Prolog. It’s important to note
-----
Figure 2: Comparing single model accuracy on GSM8k and Navigate benchmarks using text-only
CoT versus our neurosymbolic approach (GPT3.5 + Prolog, GPT4 + Prolog), using CoT in text and
logical code and the multiple-try inference algorithm. Few-shot CoT in text baselines on GSM8k
references are Bubeck et al. [5] and OpenAI [27]
Figure 3: Comparing single model accuracy of LLMs (GPT3.5, GPT4) on the NLR dataset when
prompted with text-only CoT versus our neurosymbolic approach (GPT3.5 + Prolog, GPT4 + Prolog),
using CoT in text and logical code and the multiple-try inference algorithm.
that GPT4 is trained on the GSM8k training dataset during its pre-training phase [27], leading to its
higher performance when prompted with CoT in text.
The Navigate task is a component of the BIG-bench benchmark, a collection of tasks designed to
evaluate the language understanding and generation capabilities of LLMs [33]. This task specifically
assesses the LLMs’ ability to solve a simple spatial reasoning task, involving tracking an agent’s
location based on instructions detailing number of steps and the direction. The task requires iteratively
updating a world model where each state has a few variables, the x and y coordinates and the
directions the agent is facing.[3] Navigate, like many other benchmark datasets such as RuleTaker
[10], is systematically generated. The problem statements are constructed by sampling a combination
of instructions, among a poll of nine instructions, with a random number of steps added to each
instruction. Originally, the Navigate task requires determining whether the agent returns to its starting
location, inherently introducing a 50% chance of correctness due to the binary nature of the answer.
To alleviate this, we revised the task to ask the final distance of agent from the start. To construct the
Prolog prompt, we used two examples of each of the two types of problems in this dataset. Since
there is no reported text-based baseline for the GPT models for the Navigate task, we prompted the
models with CoT in text to establish the baseline performance for this task. For comparability, we
constructed a four-shot CoT in text prompt with the same four problems that were used to build the
Prolog prompt.
Figure 2.b demonstrates a performance exceeding 98% for all three models integrated with Prolog
on the Navigate task, which is a significant improvement compared to the performance of models
prompted with CoT in text. This data suggests that the integration of a symbolic reasoning module
can help prevent arithmetic errors in LLMs and assist with the models’ limited working memory
when tracking and updating variables of the world model states. Notably, the davinci-text-003 and
GPT3.5 models, which have a more restricted working memory relative to GPT4, showed the most
significant improvements.
3We selected Navigate for our experiments among other BIG-bench tasks due to the fact that PaLM 540B
and PaLM 62B’s performance on the task were significantly below the best human performance score [7].
-----
Figure 4: Evaluating the robustness and performance variability of our best model, GPT4 + Prolog,
by running it 25 times on each NLR problem and recording the accuracy and average number of
attempts it took to generate a valid code using the multiple-try inference algorithm. Math Word
Problems are abbreviated as ’MWP’, Constraint Satisfaction as ’CS’, and Algorithmic Instruction as
’AI’.
**5** **NLR Dataset**
We next introduce a new dataset, the Non-Linear Reasoning dataset (NLR), a collection of 55
problems hand-designed to evaluate the generality of LLMs’ mathematical reasoning capabilities
to out-of-distribution problems requiring complex, non-linear reasoning. The NLR dataset contains
three categories of problems: (i) Math word problems (21 problems), (ii) Constraint satisfactions
problems (17 problems), and (iii) Algorithmic instructions for updating a game model (17 problems).
Unlike other reasoning datasets that are syntactically generated, where the problem statements lack
diversity in the wording, and premises and conclusions are expressed in short, direct statements
[16, 10], our problems necessitate richer natural language understanding and both implicit and explicit
deductive reasoning for resolution (see e.g. Table 2). The problems are also designed to disrupt the
statistical co-occurrence patterns of words, by drawing inspiration from popular logic problems, but
uniquely modifying the rules/constraints of the problems.
In contrast to the MATH dataset [17], which requires advanced mathematical skills such as calculating
integrals and eigenvalues, the NLR problems only require basic arithmetic. However, despite the
simplicity of the required math operations, LLMs struggle to solve NLR problems end to end (as
shown in Figure 3). This is likely due to the high interrelationship between variables of the problem
and the requirement to store and backtrack to intermediate states and trace multiple possible paths to
the solution, which is challenging to perform using LLM’s next word prediction paradigm.
To better understand the performance of the models on the NLR dataset, we introduce a notion
of “entanglement” between variables in a problem. Variables are entangled when the value of a
variable must be inferred through its relationship with other variables and modifying the value of the
variable affects the value of the variables entangled with it. For instance, in the math word problem
presented in Table 1, each person’s age is defined in terms of other people’s age, rather than by a
specific numerical value. Consequently, the system of equations encoding the relationships of the
variables require a nonlinear computational process for resolution, which involves an iterative process
of simplifying and substituting more complex linear equations. The algorithmic instruction problem
in Table 1 also exhibit variable entanglement. Seating a person in the cinema affects the availability
of the 4 adjacent seats. Hence, the seat value is entangled with the values of the 4 neighboring seats.
Consequently, all variables in the game model’s current state must be updated based on the problem’s
conditionals and update rules after each action. This significantly challenges the model’s working
memory.
-----
Figure 5: Comparing single model accuracy of GPT4 using a text-only CoT prompt versus our
neurosymbolic approach on variations of a subset of NLR problems with 0-4 entangled variables.
To illustrate the significant impact of variable entanglement on the model’s performance, we designed
instances with similar structure and reasoning pattern but varying numbers of entangled variables for
five math word problems and five algorithmic instruction problems in the NLR dataset[45]. Figure 5
shows that the capacity of GPT4 to solve the problems end-to-end drastically declines as the number
of entangled variables in the problem increases, with the model failing to solve any of the problems
that have four entangled variables when prompted with the standard CoT in text. In contrast, using
our neurosymbolic approach, the model maintains a consistently strong performance across all of the
problems.
We conducted two more series of experiments on the NLR dataset, using GPT3.5 Turbo and GPT4
6. The first compares the mean performance of our model against the text-only CoT promting
baseline across all problems (Figure 3). The second assesses our model’s robustness and performance
variability by running it 25 times on each NLR problem, recording the accuracy and the average
number of attempts needed to produce valid code for each problem using the multiple-try inference
algorithm (Figure 4). We used a five-shot prompt of CoT in text and logical code, and a five-shot
text-only prompt for each problem category. The text-only prompts were generated by initially
instructing GPT-4 to solve the problems step-by-step, 0-shot, and then debugging the solutions. We
conducted inference on a total of 40 problems (16 math word problems, 12 constraint satisfaction
problems, and 12 algorithmic instructions problems), using the multiple-try algorithm for the Prolog
augmented inference pipeline.
**(i) - Math Word Problems With Variable Entanglement**
As shown in Figure 3.a, the simple modification of defining variables in relation to other variables
significantly impacts the end-to-end performance of LLMs in solving math word problems. While
GPT4 is generally successful in extracting the equations representing the relationship between the
entangled variables from the textual information, it struggles to solve the resulting system of linear
equations when prompted with CoT in text (contrasting strongly with its almost perfect performance
on the GSM8k dataset). This suggests that while LLMs are adept at understanding the semantic
meaning of problem statements, they struggle with the non-linear computations of solving a system
of linear equations, even though the mathematical scope of operations is limited to simple algebra.
As evidenced by the 100% accuracy we obtained by integrating GPT4 with Prolog, prompting the
model to define the variable relationships as equalities in a Prolog predicate and utilizing Prolog’s
declarative arithmetic to solve the resulting system of equations, eliminates the limitation on the
number of entangled variables (Figure 5) and allows the models to solve these problems robustly.
This is evidenced by our second experiment, (Figure 4), where our model successfully solved nearly
all math word problems in all 25 attempts, with very few inference attempts.
[4Link to the NLR dataset variation with varying number of entangled variables](https://anonymous.4open.science/api/repo/Reasoning-Beyond-NL-D604/file/NLR_dataset_var_entanglement.json?v=5119db94)
5Algorithmic instructions and math word problems in the original NLR dataset have 3-5 entangled variables.
6The API to the text-davinci-003 model was disabled at the time we ran the experiments on the NLR dataset
-----
**(ii) - Constraint Satisfaction Problems**
Figure 3.b demonstrates that the LLMs mostly fail to solve the NLR constraint satisfaction problems
when prompted with CoT in text. The models often overlook some possibilities, hallucinate about
whether a possibility satisfies all constraints, or make illogical leaps in reasoning which in turn results
in a 8% success rate in solving these problems. For instance, consider the constraint satisfaction
problem presented in Table 1, which states that there are four people between Bob and Alex in a
line. The LLMs only consider the possibility where Bob is standing in the ith index and Alex in
(i + 5)th index of the line, which is one of the two possible orders of Bob and Alex. The existence of
constraints that encode multiple possibilities makes these problems particularly difficult for LLMs
due to the extensive non-linear reasoning required to trace and revisit all potential solutions.
However, as demonstrated by Figure 3, prompting GPT4 to formulate the information encoded in
the problems as Prolog predicates, where the model’s task is to encode the constraints as logical
code statements rather than attempting to iteratively check the possible states against the constraints,
effectively addresses these issues. This approach resulted in a 100% success rate in solving the
problems. GPT3.5, on the other hand, succeeds in correctly encoding the constraints as logical code
in Prolog half of the time, which demonstrates that GPT3.5 is less proficient than GPT4 in inferring
the information implied by the natural language statements of the problems.
**(iii) - Algorithmic Instructions for Updating a Game Model**
The algorithmic instructions problems in the NLR dataset are designed to evaluate the ability of
LLMs to implement the algorithm described in the problem statement to track and update the states of
a world model. Similar to the Navigate task, these problems provide a deterministic set of instructions
for updating the given initial state of the world, leading to a guaranteed feasible end state. Thus, in
solving these problems, the LLMs are not required to make decisions about the choice or order of
actions. However, in contrast to the Navigate task, where the problems are systematically generated
instances of the same task, each problem in the NLR dataset defines a new task with unique rules and
reasoning.
As shown in Figure 3.3, GPT4 solved 66.7% of the problems with the integration of Prolog, a
significant improvement compared to the 8% success rate achieved by the model when solving these
problems using text only. When tasked to solve the problems end-to-end, the models struggle to
accurately store and retrieve the intermediate states, which are typically lists of length 10, often omit
numbers when rewriting the updated list, fail to consider all of the conditionals in the given algorithm
when updating a state, or fail to apply the algorithm for the correct number of steps.
Figure 4 illustrates the correlation between the model’s robustness, accuracy, and the reasoning
required to map the information encoded by the problem statements into Prolog code. Despite notable
improvement over the text-only baseline in solving algorithmic instructions problems, our model
is least stable in solving this category, both in terms of accuracy and average number of inference
attempts made before the model generated a valid code. This is anticipated as these tasks mainly
involve modifying a game state, which does not necessitate deductive reasoning, rendering Prolog’s
reasoning engine less beneficial, and placing more load on the LLM to write the logical code that
initializes and updates a data structure that represents the world model.
**6** **Limitations and Broader Impact**
Our work, which is on the path to creating models that can reason generally, reliably, and correctly
like humans, can lead to several positive societal impacts. Such models can significantly increase
the reliability and usefulness of current AI. In turn, improving the reasoning capabilities of language
models could lead to job displacement as they can more reliably automate complex tasks traditionally
performed by humans.
The main limitation of the NLR dataset is scalability. Designing tasks that require unique reasoning
patterns for resolution, math word problems with high variable entanglement, constraint satisfaction
problems with constraints encoding multiple possibilities, and game algorithms with new rules is
complex and time consuming. Moreover, LLMs may produce coherent but incorrect logical code
solutions, making error detection challanging. Additionally, Prolog’s limited infrastructure support
-----
for complex data structures, compared to languages like Python, may restrict its applicability to
problems involving higher-dimensional data.
**7** **Conclusion**
This work highlights inherent limitations of LLMs in performing reliable and generalizable reasoning,
and suggests these problems can be mitigated by integrating a symbolic reasoning system into their
inference pipeline. Our neurosymbolic approach prompts the LLM to map the information encoded
by problem statements to logical code statements, thereby delegating the iterative computations
of explicit deductive reasoning to a reliable reasoning engine. This division of labor significantly
enhances the performance of LLMs on mathematical reasoning tasks. Additionally, our novel NLR
dataset provides a robust benchmark for evaluating the generality of LLMs’ mathematical reasoning
capabilities to problems that require unique nonlinear reasoning and challenge the limitations of the
linear next word prediction paradigm of LLMs.
**References**
[1] Clp(fd): Constraint logic programming over finite domains, . [URL https://www.](https://www.swi-prolog.org/man/clpfd.html)
```
swi-prolog.org/man/clpfd.html.
```
[[2] library(clpqr): Constraint logic programming over rationals and reals, . URL https://www.](https://www.swi-prolog.org/man/clpqr.html)
```
swi-prolog.org/man/clpqr.html.
```
[3] George Boole. The laws of thought, volume 2. Open court publishing Company, 1854.
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[5] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece
Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general
intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
[6] Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv
_preprint arXiv:2211.12588, 2022._
[7] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker
Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes,
Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson,
Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier
Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David
Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani
Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat,
Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei
Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei,
Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm: Scaling
language modeling with pathways, 2022.
[8] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh,
Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay,
Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson,
Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin,
Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier
García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David
Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani
Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat,
-----
Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei
Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei,
Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. Palm:
Scaling language modeling with pathways. J. Mach. Learn. Res., 24:240:1–240:113, 2022.
[URL https://api.semanticscholar.org/CorpusID:247951931.](https://api.semanticscholar.org/CorpusID:247951931)
[9] Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Yunxuan
Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned
language models. Journal of Machine Learning Research, 25(70):1–53, 2024.
[10] Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language,
2020.
[11] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[12] Alain Colmerauer and Philippe Roussel. The birth of prolog. In History of programming
_languages—II, pages 331–367. 1996._
[13] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Sean
Welleck, Peter West, Chandra Bhagavatula, Ronan Le Bras, et al. Faith and fate: Limits of
transformers on compositionality. Advances in Neural Information Processing Systems, 36,
2024.
[14] Evelina Fedorenko and Rosemary Varley. Language and thought are not the same thing:
evidence from neuroimaging and neurological patients. Annals of the New York Academy of
_Sciences, 1369(1):132–153, 2016._
[15] Martin Gardner. Logic Machines and Diagrams. University of Chicago Press, Chicago, 1958.
[16] Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy
Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al. Folio: Natural language reasoning
with first-order logic. arXiv preprint arXiv:2209.00840, 2022.
[17] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_arXiv preprint arXiv:2103.03874, 2021._
[18] William Stanley Jevons. I. on the mechanical performance of logical inference. Proceedings of
_the Royal Society of London, 18(114-122):166–169, 1870._
[19] Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. arXiv preprint
_arXiv:1705.04146, 2017._
[20] Kyle Mahowald, Anna A Ivanova, Idan A Blank, Nancy Kanwisher, Joshua B Tenenbaum, and
Evelina Fedorenko. Dissociating language and thought in large language models: a cognitive
perspective. arXiv preprint arXiv:2301.06627, 2023.
[21] The mathlib Community. The lean mathematical library. In Proceedings of the 9th ACM
_SIGPLAN International Conference on Certified Programs and Proofs, POPL ’20. ACM, Jan-_
[uary 2020. doi: 10.1145/3372885.3373824. URL http://dx.doi.org/10.1145/3372885.](http://dx.doi.org/10.1145/3372885.3373824)
```
3373824.
```
[22] Allen Newell, John C Shaw, and Herbert A Simon. Report on a general problem solving
program. In IFIP congress, volume 256, page 64. Pittsburgh, PA, 1959.
[23] Maxwell Nye, Michael Tessler, Josh Tenenbaum, and Brenden M Lake. Improving coherence and consistency in neural sequence models with dual-system, neuro-symbolic reasoning.
_Advances in Neural Information Processing Systems, 34:25192–25204, 2021._
-----
[24] Theo X Olausson, Alex Gu, Benjamin Lipkin, Cedegao E Zhang, Armando Solar-Lezama,
Joshua B Tenenbaum, and Roger Levy. Linc: A neurosymbolic approach for logical reasoning
by combining language models with first-order logic provers. arXiv preprint arXiv:2310.15164,
2023.
[[25] OpenAI. Chatgpt: Optimizing language models for dialogue, 2022. URL https://openai.](https://openai.com/index/chatgpt/)
```
com/index/chatgpt/.
```
[26] OpenAI. Gpt-4 technical report. _ArXiv, abs/2303.08774, 2023._ [URL https://api.](https://api.semanticscholar.org/CorpusID:257532815)
```
semanticscholar.org/CorpusID:257532815.
```
[27] R OpenAI. Gpt-4 technical report. arXiv, pages 2303–08774, 2023.
[28] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple
math word problems? arXiv preprint arXiv:2103.07191, 2021.
[29] Baolin Peng, Michel Galley, Pengcheng He, Hao Cheng, Yujia Xie, Yu Hu, Qiuyuan Huang, Lars
Liden, Zhou Yu, Weizhu Chen, et al. Check your facts and try again: Improving large language
models with external knowledge and automated feedback. arXiv preprint arXiv:2302.12813,
2023.
[30] Jack W. Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song,
John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, Eliza Rutherford, Tom
Hennigan, Jacob Menick, Albin Cassirer, Richard Powell, George van den Driessche, Lisa Anne
Hendricks, Maribeth Rauh, Po-Sen Huang, Amelia Glaese, Johannes Welbl, Sumanth Dathathri,
Saffron Huang, Jonathan Uesato, John F. J. Mellor, Irina Higgins, Antonia Creswell, Nathan
McAleese, Amy Wu, Erich Elsen, Siddhant M. Jayakumar, Elena Buchatskaya, David Budden,
Esme Sutherland, Karen Simonyan, Michela Paganini, L. Sifre, Lena Martens, Xiang Lorraine
Li, Adhiguna Kuncoro, Aida Nematzadeh, Elena Gribovskaya, Domenic Donato, Angeliki
Lazaridou, Arthur Mensch, Jean-Baptiste Lespiau, Maria Tsimpoukelli, N. K. Grigorev, Doug
Fritz, Thibault Sottiaux, Mantas Pajarskas, Tobias Pohlen, Zhitao Gong, Daniel Toyama,
Cyprien de Masson d’Autume, Yujia Li, Tayfun Terzi, Vladimir Mikulik, Igor Babuschkin,
Aidan Clark, Diego de Las Casas, Aurelia Guy, Chris Jones, James Bradbury, Matthew G.
Johnson, Blake A. Hechtman, Laura Weidinger, Iason Gabriel, William S. Isaac, Edward
Lockhart, Simon Osindero, Laura Rimell, Chris Dyer, Oriol Vinyals, Kareem W. Ayoub, Jeff
Stanway, L. L. Bennett, Demis Hassabis, Koray Kavukcuoglu, and Geoffrey Irving. Scaling
language models: Methods, analysis & insights from training gopher. ArXiv, abs/2112.11446,
[2021. URL https://api.semanticscholar.org/CorpusID:245353475.](https://api.semanticscholar.org/CorpusID:245353475)
[31] Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Eric Hambro,
Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can
teach themselves to use tools. Advances in Neural Information Processing Systems, 36, 2024.
[32] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid,
Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al.
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.
_arXiv preprint arXiv:2206.04615, 2022._
[33] Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid,
Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt,
Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman
Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S.
Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew
Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong,
Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa
Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin
Herrick, Avia Efrat, Aykut Erdem, Ayla Karaka¸s, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph,
Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin
Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron
Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan
Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris
-----
Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E.
Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo,
Dan Garrette, Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi,
Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen,
Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta,
Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke
Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan
Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth
Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang,
Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed,
Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard
de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang,
Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah
Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze,
Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack
Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B.
Simon, James Koppel, James Zheng, James Zou, Jan Koco´n, Jana Thompson, Janelle Wingfield,
Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski,
Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel,
Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John
Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose
Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum,
Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi,
Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle
Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia ContrerasOchando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt,
Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem ¸Senel, Maarten Bosma, Maarten Sap,
Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco
Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha
Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna
Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu,
Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Sw˛edrowski, Michele Bevilacqua,
Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun
Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas
Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff,
Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang,
Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth
Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy
Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush
Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade,
Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm
Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan
Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang,
Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib
Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel
Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A.
Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann,
Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry
Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal,
Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi,
Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad,
Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop
Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo
Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick,
Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar
-----
Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak,
Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus,
William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi
Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin
Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai,
Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. Beyond the
imitation game: Quantifying and extrapolating the capabilities of language models, 2023.
[34] Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark. Proofwriter: Generating implications,
proofs, and abductive statements over natural language. arXiv preprint arXiv:2012.13048, 2020.
[35] Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis
Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. Galactica: A large language
model for science. arXiv preprint arXiv:2211.09085, 2022.
[36] Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, HengTze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. Lamda: Language models for
dialog applications. arXiv preprint arXiv:2201.08239, 2022.
[37] John Venn. Symbolic Logic. 1881.
[38] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha
Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171, 2022.
[39] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,
Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.
_Advances in Neural Information Processing Systems, 35:24824–24837, 2022._
[40] Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung
Kim, Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the capabilities and
limitations of language models through counterfactual tasks. arXiv preprint arXiv:2307.02477,
2023.
[41] Honghua Zhang, Liunian Harold Li, Tao Meng, Kai-Wei Chang, and Guy Van den Broeck. On
the paradox of learning to reason from data. arXiv preprint arXiv:2205.11502, 2022.
-----
| [
"Steven T., Piantadosi",
"Nasim, Borazjanizadeh"
] | 2024-07-19T00:00:00 | null | false | 1 | 0 | [
"Prolog"
] | http://arxiv.org/abs/2407.11373 | https://arxiv.org/abs/2407.11373 | https://www.semanticscholar.org/paper/a6575b772e1da0ff2cdee8f0f586ec84e89f3f6f |
Self-Consistency Boosts Calibration for Math Reasoning | Calibration, which establishes the correlation between accuracy and model confidence, is important for LLM development. We design three off-the-shelf calibration methods based on self-consistency (Wang et al., 2022) for math reasoning tasks. Evaluation on two popular benchmarks (GSM8K and MathQA) using strong open-source LLMs (Mistral and LLaMA2), our methods better bridge model confidence and accuracy than existing methods based on p(True) (Kadavath et al., 2022) or logit (Kadavath et al., 2022). | Three off-the-shelf calibration methods based on self-consistency for math reasoning tasks better bridge model confidence and accuracy than existing methods based on p(True) (Kadavath et al., 2022) or logit (Kadavath et al., 2022). | ## Self-Consistency Boosts Calibration for Math Reasoning
**Ante Wang[1][,][2], Linfeng Song[3], Ye Tian[3], Baolin Peng[3], Lifeng Jin[3], Haitao Mi[3],**
**Jinsong Su[1][,][2]** and Dong Yu[3]
1School of Informatics, Xiamen University, China
2Key Laboratory of Digital Protection and Intelligent Processing of Intangible Cultural Heritage
of Fujian and Taiwan (Xiamen University), Ministry of Culture and Tourism, China
3Tencent AI Lab, Bellevue, WA
[email protected], [email protected]
**Abstract** 1 logit
Calibration, which establishes the correlation
between accuracy and model confidence, is important for LLM development. We design three
off-the-shelf calibration methods based on selfconsistency (Wang et al., 2022) for math reasoning tasks. Evaluation on two popular benchmarks (GSM8K and MathQA) using strong
open-source LLMs (Mistral and LLaMA2), our
methods better bridge model confidence and accuracy than existing methods based on p(True)
(Kadavath et al., 2022) or logit (Guo et al.,
2017).
0.8
0.6
|Col1|Col2|Col3|
|---|---|---|
|logit p(True) SC w/ FCN|||
0.4
0.2
logit
p(True)
SC w/ _CN_
_F_
0.1 0.3 0.5 0.7 0.9
Bins of confidence level
Figure 1: Comparison of several calibration methods on
Mistral-7B, where SC w/ FCN is one of our methods
based on self-consistency, which will be introduced in
§3.
**1** **Introduction**
Mathematical reasoning tasks (Cobbe et al., 2021;
Hendrycks et al., 2021; Amini et al., 2019) involve mapping a question into a series of equations, which are then solved to obtain the final
answer. Math reasoning has long been recognized
challenging. Existing solutions propose to map input questions into equations via semantic parsing
(Matsuzaki et al., 2017; Hopkins et al., 2017) or
AST decoding (Li et al., 2019; Qin et al., 2021;
Wu et al., 2021). Yet, the performance can degradate dramatically even with slight changes to the
questions (Patel et al., 2021; Li et al., 2022).
Recently, large language models (LLM, Achiam
et al. 2023; Touvron et al. 2023; Jiang et al. 2024)
have shown great potential for solving many math
reasoning tasks, even though they are not specifically trained on these tasks. For instance, with
chain-of-thought prompting (Wei et al., 2022) and
self-consistency (Wang et al., 2022), open-source
LLMs, such as Mixtral 8×7B (Jiang et al., 2024),
can reach an accuracy of around 80% on the
GSM8K benchmark (Cobbe et al., 2021). On the
other hand, conventional pretrained models (e.g.,
T5 (Raffel et al., 2020)) that are specifically finetuned on the GSM8K training set can only report
accuracies around 10% to 20% (Shridhar et al.,
2023; Magister et al., 2023).
However, LLMs lack adequate calibration out
of the box – the probabilities of model predictions
are often poorly aligned with the actual accuracy
(Xiong et al., 2023; Chen et al., 2023). Calibration is important for LLM development, as a wellcalibrated LLM can precisely tell how likely its
responses are correct or not. With such information, LLM developers can take multiple options to
handle low-confidence responses, such as letting
the LLM refuse to answer or keep resampling until
a confident response is produced.
In this work, we propose calibration methods
based on self-consistency (Wang et al., 2022) for
math reasoning tasks. Self-consistency performs
clustering over multiple LLM samples before picking one from the largest cluster as the response to
each input query. Here we consider several ways
to estimate model confidence using the clustering
results: cluster size that estimates how many samples agree with the selected one, cluster number
that measures to what extent samples disagree with
each other, and pairwise comparison that captures
relative differences between pairs of clusters.
-----
We conduct experiments using strong opensource LLMs: Mistral (Jiang et al., 2023, 2024)
and LLaMA2 (Touvron et al., 2023) series models
with / without being aligned with instructions. Results on GSM8K (Cobbe et al., 2021) and MathQA
(Amini et al., 2019) show that all our methods
better calibrate these models than exiting popular
methods, such as p(True) (Kadavath et al., 2022)
and logit (Guo et al., 2017) over the whole reasoning path or target answer span only.
**2** **Preview: Self-Consistency with CoT**
**Prompting**
For math reasoning, there are usually multiple trajectories to reach the final solution. To replicate this
process, Wang et al. (2022) initially sample various reasoning paths r1, ..., rN from the LLM given
input x with Chain-of-Thought (CoT) prompting.[1]
Then, the answers a1, ..., aN are extracted from the
paths, and the most consistent answer (the one win
by majority vote among the answers) is selected as
the final answer a:
it by “1 − _x”:_
_CN_ (x, θ) = 1 (2)
_F_ _−_ _[|C|]N [.]_
**Cluster Size** In a similar vein, we adopt the Clus_ter Size: the number of samples (e.g., ni) within_
a specific cluster (e.g., ci). Again, we compute
its proportion relative to the total sample size to
normalize the score into the range [0, 1]:
_CS(x, θ) =_ _[n][i]_ (3)
_F_ _N [.]_
In contrast to the cluster number, the cluster size is
more universally applicable across diverse prompts,
as the cluster number can easily become ineffective
when the output space of an LLM is restricted, such
as when options for a question are provided.
**Pairwise Comparison** The Cluster Number and
_Cluster Size primarily consider the number of dis-_
tinct answers and the number of sampled paths
within a single cluster, respectively. They both
overlook the information by comparing different
clusters. For example, they may fail to consider the
situation when the sizes of the top-ranked clusters
are close. Consequently, we introduce the Pairwise
_Comparison method, which computes the winning_
rate of the chosen cluster (ci) against each of the
remaining clusters:
1(ai = ˆa),
_i=1_
X
**a = max**
(1)
_ri, ai_ `LLMθ(x),`
_∼_
where ri, ai denote the i-th sampled reasoning path
and its corresponding answer, respectively.
**3** **Calibration using Self-Consistency**
After performing self-consistency on input x using
```
LLMθ, we obtain a set of clusters C = {c1, ..., c|C|}
```
with each cluster ci comprising ni sampled responses with the same answers. We design the following strategies, tailored to the characteristics of
these clusters, to estimate the confidence of LLMθ.
**Cluster Number** We initially consider the Clus_ter Number |C|. This is motivated by the finding_
of previous work (Wang et al., 2022; Xiong et al.,
2023): LLMs tend to generate consistent answers
when they are confident about their predictions,
and thus the cluster number (number of distinct
answers) tends to be small. We further divide the
cluster number by the sample size N to normalize
the score into the range of [0, 1], before reversing
1Here we follow common practice to adopt demonstrations
with rationales for pretrained only models (e.g., Mistral-7B)
and use “Let’s think step by step” (Kojima et al., 2022) for
instruction-tuned models (e.g., Mistral-7B-Inst).
_|C|_
Yj≠ _i_
_ni_
_,_ (4)
_ni + nj_
_PC(x, θ) =_
_F_
where _nin+inj_ [represents the winning rate of selected]
cluster ci against another cluster cj.
**4** **Experiments**
**4.1** **Setup**
**Datasets** We conduct experiments on two popular math reasoning benchmarks of different
type of questions, GSM8K (Cobbe et al., 2021)
and MathQA (Amini et al., 2019). Particularly,
GSM8K comprises 1,319 linguistically diverse
grade school math word problems for testing. On
the other hand, MathQA offers 2,985 multiple_choice math word problems for evaluation._
**Evaluation Metrics** We adopt Brier Score and
Expected Calibration Error (ECE) as evaluating
metrics following common practice (Geng et al.,
2023).
-----
Mistral-7B Mistral-7B-Inst Mixtral-8×7B Mixtral-8×7B-Inst
ECE ↓ Brier ↓ ECE ↓ Brier ↓ ECE ↓ Brier ↓ ECE ↓ Brier ↓
logit w/ Path 0.394 0.399 0.414 0.414 0.178 0.265 0.233 0.252
logit w/ Answer 0.505 0.488 0.467 0.458 0.307 0.312 0.236 0.238
p(True) 0.127 0.267 0.406 0.407 **0.070** 0.201 0.195 0.198
_Self-Consistency_
w/ FCN **0.092** 0.186 **0.125** 0.182 0.136 0.157 **0.075** 0.092
w/ FCS 0.148 **0.185** 0.163 **0.180** 0.173 **0.156** 0.085 **0.086**
w/ FP C 0.248 0.229 0.253 0.226 0.238 0.194 0.110 0.096
logit w/ Path 0.500 0.499 0.539 0.510 0.333 0.380 0.364 0.373
logit w/ Answer 0.356 0.362 0.291 0.319 0.266 0.290 0.220 0.281
p(True) 0.350 0.309 0.271 0.317 0.228 0.253 0.273 0.272
_Self-Consistency_
w/ FCN 0.331 0.336 0.374 0.359 0.143 0.236 0.128 0.215
w/ FCS 0.091 0.225 0.114 0.227 **0.080** **0.190** **0.035** **0.171**
w/ FP C **0.052** **0.220** **0.065** **0.219** 0.143 0.203 0.054 0.174
GSM8K
MathQA
Table 1: Main test results on GSM8K and MathQA when using Mistral family models. Specifically, ∗-Inst indicates
instruction-tuned models.
**Settings** We conduct experiments on LLaMA2
and Mistral-family models and investigate both pretrained or instruction-tuned variations. We use nucleus sampling to obtain N = 16 samples by default for each instance and use temperatures of 0.6
/ 1.0 for all pretrained / instruction-tuned models.
**Baselines** We take the three representative baselines below for comparison:
- logit w/ Path: It averages the probabilities of
the tokens from the whole path to estimate the
confidence of each prediction.
- logit w/ Answer: It is similar to logit w/ Path
but only consider the tokens from the predicted
answer span.
- p(True): It asks the LLM itself to classify its prediction as True or False. Then, it takes the predicted probability of True as its confidence. We
follow Kadavath et al. (2022) to construct 8-shot
demonstrations for prompting pretrained models but directly use instruction for instructiontuned models.
**4.2** **Results and Analysis**
**Main Results** Table 1 presents the main results
obtained from both benchmarks using Mistralfamily models. p(True) performs best among the
baselines, echoing the findings of Kadavath et al.
(2022). However, due to its reliance on prompt
design and in-context examples to aid the LLM
to classify its predictions, it can be challenging to
construct effective demonstrations or instructions.
Given instances (x1, y1), ..., (x ¯N _[, y]N[ ¯]_ [)][ and their]
corresponding LLM predictions ˆy1, ..., ˆy ¯N [, ECE]
is computed by first binning the predictions into
_M = 10 intervals based on their LLM confidence_
levels (e.g., p(ˆyi)). For each bin (e.g. Bm), it then
calculates the accuracy (acc(Bm)) and the average
confidence (conf(Bm)):
1(yi = ˆyi),
_i∈XBm_ (5)
_p(ˆyi),_
_i∈XBm_
acc(Bm) =
conf(Bm) =
_Bm_
_|_ _|_
1
_Bm_
_|_ _|_
where _Bm_ is the number of samples in bin Bm.
_|_ _|_
Finally, the difference between accuracy and confidence is averaged across all bins to obtain the ECE
score:
_|Bm|_ (6)
_N¯_
_[|][acc][(][B][m][)][ −]_ [conf][(][B][m][)][|]
ECE =
_m=1_
As another popular metric, Brier score is similar
to ECE but conducted at the instance level:
Brier = [1]
¯
(p(ˆyi) − 1(yi = ˆyi))[2]. (7)
_i=1_
X
Both metrics range from 0 to 1 with lower values
indicating better calibration. We take Brier score as
the main metric, as it is more robust to unbalanced
distribution across bins (e.g. instances concentrate
to one or two bins).
-----
0.3
0.25
0.16
0.14
0.8
0.6
0.2
0.15
0.12
0.1
0.1
0.05
0.4
0.08
|FCS FCN FPC|FCS FCN FPC|Col3|
|---|---|---|
Accuracy
_CS_
_F_
_CN_
_F_
_PC_
_F_
4 8 16 32 64
N
Figure 2: Calibration results on GSM8K when using
Mixtral-8×7B-Inst with different N .
Model
Figure 3: Performance and calibration results on
GSM8K using different models below sorted by their
performance: ① LLaMA2-7B-Chat, ② LLaMA2-13BChat, ③ Mistral-7B-Inst, ④ LLaMA2-70B-Chat, ⑤
Mixtral-8×7B-Inst.
In general, self-consistency-based methods surpass baselines in most cases regarding Brier and
ECE, validating the efficacy of employing selfconsistency features for estimating model confidence. We also note that baselines can occasionally
yield impressive ECE scores (p(True) on GSM8K
with Mixtral-8×7B). However, we observe that this
is attributed to the concentration of most samples
in just a few bins (e.g., Figure 1), leading to unreliable measurements. Nevertheless, our approaches
still exhibit strong performance in terms of ECE
scores across various settings.
Among the self-consistency-based methods,
_FCN yields better ECE results on GSM8K, while_
_FCS achieves the highest Brier score. Conversely,_
for MathQA, FCN performs significantly worse
than the other two. This is because MathQA is a
_multi-choice task, and thus the cluster number of_
LLM answers is strictly limited by the provided
choices. In conclusion, FCS demonstrates greater
generality across diverse settings, but FCN and
_FPC do offer improved estimation in certain cases._
**Influence of Sample Size N** Previous research
(Wang et al., 2022) has demonstrated that the sample size N can significantly affect the accuracy of
self-consistency. When N increases, the model
performance initially continues to improve before
stabilizing once N reaches a sufficient level. Therefore, we take Mixtral-8×7B-Inst as a case study to
examine the impact of N on calibration.
As illustrated in Figure 2, the Brier scores for
all our methods initially decline and then remain
constant as N grows. For FCS and FPC, N = 8
is adequate for accurate estimation. In contrast,
_FCN requires a larger N_, indicating that the cluster
number is more susceptible to the randomness of
sampling.
**Correlation between Performance and Calibra-**
**tion** We finally explore the associations between
model performance (Accuracy) and calibration.
Figure 3 showcases the results on instruction-tuned
LLaMA2 and Mistral series models, arranged in
ascending order based on their performance. We
generally observe a positively correlated trend between calibration (lower the better) and performance (higher the better) among the studied models. This observation indicates that more powerful
models also exhibit enhanced calibration, echoing
the findings of Kadavath et al. (2022). This phenomenon can be attributed to the fact that when a
tested LLM is stronger, it is capable of generating
more reasonable and consistent responses, leading
to improved calibration.
**5** **Conclusion**
In this paper, we extend the widely-used inference
strategy, self-consistency, to the field of calibration. Specifically, we develop three off-the-shelf
calibration methods based on self-consistency for
math reasoning tasks. Compared to conventional
methods (p(True) and logit), our approaches yield
significantly improved ECE and Brier scores on
popular GSM8K and MathQA datasets. Future
research directions include designing more effective calibration methods, leveraging richer features
and employing more strategies (e.g., temperature
_scaling (Guo et al., 2017)) to enhance calibration_
performance. Our ultimate goal is to construct reliable and honest LLMs with the help of accurate
confidence estimation.
-----
**Limitations**
Our methods are founded on the principle of selfconsistency, which relies on sampling multiple
times for prediction. This approach, however,
needs additional cost for inference, which may
not be efficient and eco-friendly. Besides, our
current work is limited to mathematical problems
and does not explore other types of tasks, such as
question-answering. Although it is crucial to extend our methods to encompass other tasks, this is
non-trivial due to the inherent difficulty in dividing certain tasks’ model predictions into distinct
clusters.
**Ethics Statement**
We focus on ethical AI research and strive to
achieve a balance between technological advancements and our ethical responsibilities. This work
studies calibration, which aims to enhance the reliability of LLMs. Besides, we conduct experiments
only on publicly available datasets, upholding privacy and anonymity rules.
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
_arXiv preprint arXiv:2303.08774._
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. Mathqa: Towards interpretable math
word problem solving with operation-based formalisms. In Proceedings of the 2019 Conference
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
2357–2367.
Yangyi Chen, Lifan Yuan, Ganqu Cui, Zhiyuan Liu, and
Heng Ji. 2023. A close look into the calibration of
pre-trained language models. In Proceedings of the
_61st Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
1343–1367.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl,
Preslav Nakov, and Iryna Gurevych. 2023. A survey of language model confidence estimation and
calibration. arXiv preprint arXiv:2311.08298.
Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. 2017. On calibration of modern neural networks. In International conference on machine learn_ing, pages 1321–1330. PMLR._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. In Thirty_fifth Conference on Neural Information Processing_
_Systems Datasets and Benchmarks Track (Round 2)._
Mark Hopkins, Cristian Petrescu-Prahova, Roie Levin,
Ronan Le Bras, Alvaro Herrasti, and Vidur Joshi.
2017. Beyond sentential semantic parsing: Tackling
the math sat with a cascade of tree transducers. In
_Proceedings of the 2017 Conference on Empirical_
_Methods in Natural Language Processing, pages 795–_
804.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral
7b. arXiv preprint arXiv:2310.06825.
Albert Q Jiang, Alexandre Sablayrolles, Antoine
Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas,
Emma Bou Hanna, Florian Bressand, et al. 2024.
Mixtral of experts. arXiv preprint arXiv:2401.04088.
Saurav Kadavath, Tom Conerly, Amanda Askell, Tom
Henighan, Dawn Drain, Ethan Perez, Nicholas
Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli
Tran-Johnson, et al. 2022. Language models
(mostly) know what they know. _arXiv preprint_
_arXiv:2207.05221._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian
Dai, and Dongxiang Zhang. 2019. Modeling intrarelation in math word problems with different functional multi-head attentions. In Proceedings of the
_57th annual meeting of the association for computa-_
_tional linguistics, pages 6162–6167._
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou,
Chao Li, Hongzhi Liu, and Yunbo Cao. 2022. Seeking patterns, not just memorizing procedures: Contrastive learning for solving math word problems. In
_Findings of the Association for Computational Lin-_
_guistics: ACL 2022, pages 2486–2496._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
Teaching small language models to reason. In Pro_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 2: Short_
_Papers), pages 1773–1781._
-----
Takuya Matsuzaki, Takumi Ito, Hidenao Iwane, Hirokazu Anai, and Noriko H Arai. 2017. Semantic
parsing of pre-university math problems. In Proceed_ings of the 55th Annual Meeting of the Association for_
_Computational Linguistics (Volume 1: Long Papers),_
pages 2131–2141.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are nlp models really able to solve simple
math word problems? In Proceedings of the 2021
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094._
Jinghui Qin, Xiaodan Liang, Yining Hong, Jianheng
Tang, and Liang Lin. 2021. Neural-symbolic solver
for math word problems with auxiliary tasks. In
_Proceedings of the 59th Annual Meeting of the Asso-_
_ciation for Computational Linguistics and the 11th_
_International Joint Conference on Natural Language_
_Processing (Volume 1: Long Papers), pages 5870–_
5881.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text
transformer. Journal of machine learning research,
21(140):1–67.
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
Sachan. 2023. Distilling reasoning capabilities into
smaller language models. In Findings of the Associa_tion for Computational Linguistics: ACL 2023, pages_
7059–7073.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H Chi, Sharan Narang, Aakanksha Chowdhery,
and Denny Zhou. 2022. Self-consistency improves
chain of thought reasoning in language models. In
_The Eleventh International Conference on Learning_
_Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in neural
_information processing systems, 35:24824–24837._
Qinzhuo Wu, Qi Zhang, Zhongyu Wei, and Xuan-Jing
Huang. 2021. Math word problem solving with explicit numerical values. In Proceedings of the 59th
_Annual Meeting of the Association for Computational_
_Linguistics and the 11th International Joint Confer-_
_ence on Natural Language Processing (Volume 1:_
_Long Papers), pages 5859–5869._
Miao Xiong, Zhiyuan Hu, Xinyang Lu, YIFEI LI, Jie
Fu, Junxian He, and Bryan Hooi. 2023. Can llms
express their uncertainty? an empirical evaluation of
confidence elicitation in llms. In The Twelfth Inter_national Conference on Learning Representations._
-----
| [
"Ante, Wang",
"Baolin, Peng",
"Linfeng, Song",
"Ye, Tian",
"Lifeng, Jin",
"Haitao, Mi",
"Dong, Yu",
"Jinsong, Su"
] | 2024-03-14T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2403.09849 | https://arxiv.org/abs/2403.09849 | https://www.semanticscholar.org/paper/073d5e2a7da5b8991dc1030ea813f6a94a731777 |
Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models | The alignments of reasoning abilities between smaller and larger Language Models are largely conducted via Supervised Fine-Tuning (SFT) using demonstrations generated from robust Large Language Models (LLMs). Although these approaches deliver more performant models, they do not show sufficiently strong generalization ability as the training only relies on the provided demonstrations. In this paper, we propose the Self-refine Instruction-tuning method that elicits Smaller Language Models to self-refine their abilities. Our approach is based on a two-stage process, where reasoning abilities are first transferred between LLMs and Small Language Models (SLMs) via Instruction-tuning on demonstrations provided by LLMs, and then the instructed models Self-refine their abilities through preference optimization strategies. In particular, the second phase operates refinement heuristics based on the Direct Preference Optimization algorithm, where the SLMs are elicited to deliver a series of reasoning paths by automatically sampling the generated responses and providing rewards using ground truths from the LLMs. Results obtained on commonsense and math reasoning tasks show that this approach significantly outperforms Instruction-tuning in both in-domain and out-domain scenarios, aligning the reasoning abilities of Smaller and Larger Language Models. | This paper proposes the Self-refine Instruction-tuning method that elicits Smaller Language Models to self-refine their abilities, and shows that this approach significantly outperforms Instruction-tuning in both in-domain and out-domain scenarios, aligning the reasoning abilities of Smaller and Larger Language Models. | ## Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
**Leonardo Ranaldi** [(][†][)], Andrè Freitas[(][†][,][∗][)]
(†) Idiap Research Institute, Martigny, Switzerland
(∗)Department of Computer Science, University of Manchester, UK
[name].[surname]@idiap.ch
**Abstract**
The alignments of reasoning abilities between
smaller and larger Language Models are largely
conducted via Supervised Fine-Tuning (SFT)
using demonstrations generated from robust
Large Language Models (LLMs). Although
these approaches deliver more performant models, they do not show sufficiently strong generalization ability as the training only relies on
the provided demonstrations.
In this paper, we propose the Self-refine
_Instruction-tuning method that elicits Smaller_
Language Models to self-refine their abilities. Our approach is based on a two-stage
process, where reasoning abilities are first
transferred between LLMs and Small Language Models (SLMs) via Instruction-tuning
on demonstrations provided by LLMs, and then
the instructed models Self-refine their abilities
through preference optimization strategies.
In particular, the second phase operates refinement heuristics based on the Direct Preference
Optimization algorithm, where the SLMs are
elicited to deliver a series of reasoning paths
by automatically sampling the generated responses and providing rewards using ground
truths from the LLMs. Results obtained on
commonsense and math reasoning tasks show
that this approach significantly outperforms
Instruction-tuning in both in-domain and outdomain scenarios, aligning the reasoning abilities of Smaller and Larger Language Models.
commonsense (Bubeck et al., 2023), symbolic and
mathematical (Gaur and Saunshi, 2023; Liu et al.,
2023) reasoning datasets.
Since the size of LLMs represents an adoption
barrier for many use cases and smaller models do
not seem to have the same emergent reasoning abilities as LLMs, several state-of-the-art alignment approaches for solving mathematical problems have
emerged, where Supervised Fine-Tuning (SFT) has
been used to train Small Language Models (SLMs)
using CoT annotations. However, these annotations
outline the intermediate reasoning steps for solving a given problem, which consists of a reasoning
pathway generated by the LLM for the specific
case. This phenomenon can lead to a relatively
weak generalization capacity of tuned models that
have a few and limited number of samples. Indeed,
there are often multiple valid CoT annotations for
the same question (Cobbe et al., 2021; Zhang et al.,
2023), which underlines the need for a more general CoT-based fine-tuning approach.
In this paper, we propose Self-refine Instruction_tuning, which is a method to enable CoT reasoning_
over SLMs. Our approach starts by performing
Instruction-tuning on SLMs via demonstrations delivered by LLMs and then applies preference optimization based on reinforcement learning (RL)
heuristics to let the SLMs refine their abilities to
solve a task in a step-wise manner. Hence, proposing a teacher-student alignment method, we investigate the impact of transferring Chain-of-Thought
reasoning abilities through the support of Demonstrations "taught" by LLMs to SLMs as a warm-up
to the Self-refine process. Therefore, to reinforce
the Instruction-tuning phase, we analyze whether
preference optimization methods could strengthen
students’ step-wise reasoning abilities.
Complementing the foundation work of (Wang
et al., 2023c,d), we introduce Self-refinement based
on reinforcement learning, and in contrast to (Uesato et al., 2022; Luo et al., 2023; Luong et al.,
**1** **Introduction**
Previous works have demonstrated that Chain-ofThought (CoT) prompting can improve the Large
Language Models (LLMs) [1] capacity to perform
complex reasoning tasks by decomposing a reasoning task into a sequence of intermediate steps
(Wei et al., 2022), where the generation of multistep controlled reasoning can improve results in
1(e.g., with more than 60B parameters (Wei et al., 2023))
-----
Figure 1: In Self-refine Instruction-tuning, the Demonstrations delivered by teacher models are used to align
reasoning abilities in a teacher-student setting. Following the transference of step-wise reasoning knowledge via
instruction tuning, the students Self-refine their abilities with the support of Direct Preference Optimization methods.
2024; Paul et al., 2024), we use an Instructiontuning via Demonstrations approach (Ranaldi and
Freitas, 2024) (i.e., a task-oriented specialization
of Supervised Fine-Tuning) through which we instruct SLMs using Demonstrations delivered from
different teachers prompted via a CoT mechanism.
This leads to the target research questions, which
are the focus of this paper:
**RQ1: How does Instruction-tuning via Demon-**
strations initialize the SLMs’ reasoning abilities?
**RQ2: What is the effect of the preference op-**
timization algorithm on the alignment between
teacher and student models?
**RQ3: How much does the ability to solve tasks**
in a multi-step manner improve across different
scenarios?
To answer these questions, we select three different SLMs: Llama2-7b, -13b (Touvron et al.,
2023), Mistral-7b (Jiang et al., 2023); and three
LLMs Llama2-70b, Mixtral (Jiang et al., 2024) and
GPT-3.5 (OpenAI, 2023). In the teacher-student
alignment phase, we use LLMs (teachers) to deliver Demonstrations at the core of the Instructiontuning process (see Figure 1) used to instruct SLMs
(students). In the Self-refine phase, the students improve their step-wise reasoning abilities via Direct
Preference Optimization (DPO) (Rafailov et al.,
2023). This allows the students to sample different
reasoning paths and CoT Demonstrations and learn
from them (Figure 1). Moreover, differently from
previous works, preferences are self-generated, and
there is no need for a separately trained reward
model as in the previous approaches (Ouyang et al.,
2022). We demonstrate the effectiveness of the
proposed refinement technique in aligning teacherstudent models (overcoming the differences highlighted by Ranaldi and Freitas (2024)) from the
same family and in maximizing efficiency in indomain and out-domain tasks.
Our contributions can be summarized as follows:
- We propose the Self-refined Instruction-tuning
approach that is a task-oriented Supervised
Fine-Tuning (SFT), which utilizes DPO
heuristics to conduct a self-refinement process
starting from instructed SLMs.
- We analyze the impact of different configurations of Instruction-tuning on the SLMs before and after the Self-refining phase by conducting in-depth experiments on mathematical problems and common sense questionanswering tasks using Demonstrations delivered by teacher of the same family (in-family)
or not (out-family). Hence, we show the downstream functionalities in both scenarios.
- Finally, we display the generalization abilities acquired via Self-refined Instructiontuning through a systematic evaluation using
Demonstrations provided by in-family and
out-family teachers, both within in-domain
and out-domain tasks.
**2** **Method**
To transfer the step-wise reasoning properties from
Large Language Models (LLMs) to Small Language Models (SLMs), we propose Self-refine
_Instruction-tuning, a two-step approach as shown_
in Figure 1. In the first phase, there is a transfer of
step-wise (CoT) reasoning via Instruction-tuning,
where LLMs systematically generate Demonstrations which are used by SLMs to initialize their
step-wise (CoT) alignment (Section 2.1). In the
second phase, the instructed SLMs Self-refine their
internal CoT model via the preference optimization
technique presented in Section 2.2.
**2.1** **Instruction-tuning Phase**
A significant part of the state-of-the-art works employs standard Supervised Fine-Tuning (SFT) performed on annotations produced by a single LLM
-----
(Large Language Model) as a mechanism to improve SLMs. In our contribution, we take a step
further and use Instruction-tuning, which is a taskoriented specialization of SFT (Supervised FineTuning), in coordination with a teacher-student
alignment approach (detailed in Appendix A). In
this phase, the SLM (student) is fine-tuned on a
dataset produced by LLM (teacher) comprising a
set of tuples in the form of (i, q, ai), where i represents a specific instruction, q is the input question
(e.g., math-word problem), and ai is the expected
output and CoT answers generated from the teacher
in response to the instruction and input. This setup
is intended to transfer to the student models foundational problem-solving abilities, emphasizing the
generation of outputs that conform to the provided
instructions. The CoT answer ai is articulated as:
_ai = [w1, w2, . . ., wl_ 1, wl]
_−_
with l indicating the sequence length. At each
timestep t, the action wt is derived from the policy πθ(·|st), where wt can be any token from the
models vocabulary, and the state st encapsulates
the concatenation of all previously generated tokens and the optional input x if provided. The state
transition is defined as:
be conducted in an SFT style, relying exclusively
on labeled preference data. The policy model, defined as πθ, learns by repeatedly sampling the answers generated by teachers and students.
**Direct Preference Optimization** In the standard
DPO approach (Rafailov et al., 2023), a human annotator ranks the outputs from a reference policy,
labeling winning and losing pairs yw = πinst(x)
and yl = πinst(x). However, we propose an optimization step via Self-generated annotation by
the students πinst, which, after Instruction-tuning,
should have more robust performances and reliably
follow the demands of the questions.
For each Demonstration (i, x, ai), we prompt
the students using the input x = i + q ( or xCoT =
_x + "Let’s think step by step") (blue block_
in Figure 1). Hence, for each instance within the
Demonstrations we collect the Answers (ya =
_πinst(x)) that are the answers generated by the_
student given the input x, and the CoT-Answers
(yCoT = πinst(xCoT )) are the answers that deliver
CoT generated by the student elicited via CoT
mechanism xCoT .
In particular, assuming it is preferable for the
model to generate responses that provide a CoT
when elicited with xCoT and responses when
prompted with x just as the corresponding LLM
teacher would do, we propose an alignment by exploiting DPO optimization. This aims to move the
default style of our model (response generated by
the student) towards the desired style (answers that
deliver CoT). Different configurations are proposed
depending on the desired result. Starting from the
standard equation 1:
DPO(πθ; πinst) = E(x,yw,yl) _D_
_L_ _−_ _∼_ (1)
[log σ(M (x, yw, yl))]
where σ is the sigmoid function, and
_M_ (x, yw, yl) = β log _[π][θ][(][y][w][|][x][)]_
_πsft(yw|x)_ _[−][β][ log][ π]πsf[θ]t[(]([y]y[l]l[|]|[x]x[)])_
(2)
where β is a hyperparameter.
We propose the Self-refine Instruction-tuning
that uses as optimization technique DPOCoT (described in details in Appendix B in Equation 3.
In particular, in DPOCoT the answers that deliver
a CoT response which is self-generated from the
students are referred to as the preferred response.
(x, i) if t = 0
[st, wt] if 1 _t_ _l_
_≤_ _≤_
_st+1 =_
The Instruction-tuning loss function explicitly
integrates the instruction i, aligning the models’
learning process with the instructional context.
This loss function is formulated as:
_L_
log πθ(wt _st, i)_
_|_
" _t=1_
X
_Linst(θ) = −E(i,q,ai)∼D_
Here, πθ is conditioned on both the state st, the
input q, and the instruction i, ensuring that the
model prioritizes instruction compliance in its output generation. This methodological shift from
SFT to Instruction-tuning underlines the principle
of enhancing the models’ ability to accurately interpret and execute complex instructions.
**2.2** **Self-refinement Phase**
In the second phase, the instructed SLMs (students)
that have improved CoT properties via Instructiontuning (Section 2.1) self-refine these properties
with the support of Direct Preference Optimization
(DPO) (Rafailov et al., 2023). This refinement can
-----
**3** **Experimental Setup**
In order to evaluate the proposed model, we use
both commonsense and mathematical reasoning
tasks (introduced in Section 3.1) that are generally
used to assess the step-wise inference properties
of Large Language Models (LLMs). Regarding
the Self-refine Instruction-tuning on the Small Language Models (SLMs), we use the approach presented in Section 3.2.
**3.1** **Tasks & Datasets**
In this paper, we selected different tasks that focus
on reasoning tasks:
**Commonsense Task** We adopt two benchmarks
to evaluate commonsense reasoning: CommonSenseQA (Talmor et al., 2019) (CSQA) and OpenBookQA (Mihaylov et al., 2018) (OBQA) are two
multi-choice commonsense question-answering
tasks.
**Physical & Social Interaction Task** We adopt
two benchmarks to evaluate reasoning in the context of everyday situations, aiming to establish the
most reasonable solution: Interaction Question Answering (PIQA) (Bisk et al., 2019) and Social Interaction Question Answering (SIQA) (Sap et al.,
2019), which emphasizes people’s actions and social implications.
**Mathematical Task** We use two math word problem benchmarks to evaluate the models of mathematical reasoning. MultiArith (Roy and Roth,
2015) covers a set of multi-step arithmetic reasoning tasks, while GSM8k (Cobbe et al., 2021) covers
a set of primary school-level mathematical problems.
**Additional benchmarks** Finally, to evaluate the
adaptability of our proposal, we conduct further
analysis on two additional evaluation benchmarks:
MATH (Hendrycks et al., 2021b), and MMLU
(Hendrycks et al., 2021a).
**Datasets** Since the test split is not prescribed for
all the benchmarks, we adopt the following strategy: for SIQA, PIQA, CSQA, and OBQA, we
use 4000 examples with equally distributed target classes as training data and the validation versions found on huggingface as test data, while for
GSM8K and MultiArith we use the full huggingface datasets. In Table 8, we report the descriptive statistics and splitting ratios, while in Table 7,
we report one example for each benchmark. The
supporting datasets are publicly accessible as described in Table 9.
**3.2** **Self-refine Instruction-tuning Pipeline**
The Self-refine Instruction-tuning comprises the
annotation process conducted by the LLMs teachers that are prompted in the zero-shot scenario (as
shown in Table 6), as explained in Appendix A. We
selected Llama-2-70 (Touvron et al., 2023), Mixtral7x8 (Jiang et al., 2024) and GPT-3.5 (OpenAI,
2023) as LLMs (teachers) and Llama2-7, -13 (Touvron et al., 2023) and Mistral-7 (Jiang et al., 2023)
SMLs (students) models.
Hence, the students models are tuned, as proposed in (Taori et al., 2023) and evaluated with
probing pipelines (detailed in Section 3.3). The
students are instructed via Demonstrations that contain the answers generated by the teachers, as explained in Section 2.1. Downstream of the teacherstudent CoT transference process, the optimization
technique (proposed in Section 2.2 and detailed in
Appendix B) is employed to improve alignment
and self-refine the quality of the generation.
**3.2.1** **Models Setup**
We conduct the Self-refined Instruction-tuning
in two different phases. Firstly, we start with
Instruction-tuning phase using QLoRA Dettmers
et al. (2023). This approach allows Instructiontuning to be performed while reducing memory
usage. In particular, Dettmers et al. (2023) propose
several techniques for tuning models with many parameters on GPUs with limited resources while preserving 16-bit tuning performance. We follow the
training approach proposed in (Taori et al., 2023),
setting four training epochs using a learning rate
of 2e-5 with a 1e-4 weight decay. We use the cosine learning rate scheduler with a warm-up ratio
of 0.03. Furthermore, we conduct the Self-refine
phase following the approach proposed in (Rafailov
et al., 2023). In particular, we use the huggingface
_DPOtrainer to support its reproducibility. We fol-_
low the parameters proposed in (Rafailov et al.,
2023). Hence, for the DPO policy, our work employs a learning rate of 1e-6, β set at 0.1, and a
warm-up step count of 100. The batch size is configured to 128. The optimization process is capped
at a maximum of 1000 steps, where we save the
checkpoint corresponding to the lowest loss on the
validation set. The experiments were conducted
on a workstation equipped with four Nvidia RTX
A6000 with 48GB of VRAM.
-----
Figure 2: Accuracies (%) on benchmarks (Section 3.1) before Instruction-tuning (i.e., Baselines and Baseline
CoT), after Instruction-tuning (IT) performed on Demonstrations delivering CoT and finally behind the Self-refine
Instruction-tuning phase (Self IT). In particular, the models were instructed via Demonstrations delivered by
in-family LLMs (as described in the legend, we use the notation method(Teacher->Student)).
**3.3** **Evaluation**
The most commonly used evaluation methods for
question-answering tasks are language-model probing, in which the option with the highest probability is selected (Brown et al., 2020), and multiplechoice probing, in which the models are asked to
commit to an answer. The evaluation in the first
case is performed with a function taking the argmax
and, in the second case, with a direct string matching. The second method is more widely used in
recent evaluations as it can be inclusive to the larger
GPT family models(OpenAI, 2023), where probability values are not readily accessible. In the
experiments, we chose the latter to have a comparable and scalable pipeline (Details provided in Appendix C.2). Finally, string matching is performed
between the generated outputs and the target choice
to evaluate the percentages of the correct answers.
**4** **Results & Discussion**
The Self-refine Instruction-tuning improves the
alignment between Large Language Models
(LLMs) and Small (SLMs) in both in-family and
out-family settings. These conclusions can be observed in Figure 2 and Figure 3, which reports
the downstream accuracies without tuning (see the
Baselines), with only the Instruction-tuning phase
on Demonstrations and after the Self-refine phase.
As discussed in Section 4.1, the models with only
Instruction-tuning on Demonstrations (generated
by LLMs) transfers the reasoning properties in a
marginal way (see Instruction-tuned in Figures 2).
However, although teacher-student alignment
via Instruction-tuning produces better students, an
improved alignment is achieved through the Selfrefine phase, as discussed in 4.2. In particular,
the ’Self-refine Instruction-tuning’ bars in Figure
2 show that the students self-refined outperformed
the students tuned only with Instruction-tuning
(’Instruction-tuning’ bars on Figure 2). Furthermore, the alignment via Demonstrations generated
by teachers outside the same family (out-family)
delivers more robust students (see Figure 3 the Selfrefine Instruction-tuning and (in-family) bars).
Finally, students models behind the self-refine
phase outperformed others in both in-domain
and out-domain tasks (discussed in Section 4.3).
Hence, the self-refine mechanism effectively aligns
teacher-student capabilities in out-domain tasks by
enhancing performance even in the presence of
fewer Demonstrations (Section 4.4).
**4.1** **The Instruction-tuning alignment**
Instruction-tuning led by Larger Language Models (teachers models), which are able to deliver
multi-step reasoned answers, induces this property
within Smaller Language Models (students models). This can be seen in the experiments in Figure
2, Figure 3 and additional evaluations in Appendix
I. The student models behind instruction-tuning on
demonstrations produced by teacher models outperformed the baselines of the proposed benchmarks.
While one can observe consistent improvements
-----
Figure 3: Accuracies (%) on benchmarks (Section 3.1) before Instruction-tuning ( Baseline CoT), behind first phase
performed on Demonstrations delivering CoT (i.e., Instruction-tuned (IT)) and finally behind the Self-refine phase
(i.e., Self-refine IT). In particular, the models were instructed via Demonstrations delivered by out-family LLMs (as
described in the legend, we use the notation method(Teacher->Student)).
in performance across the board, there are moderate variations across models and tasks. The teacher
models that generate Demonstrations stem from different families and perform differently, as shown in
Table 5. The consequence of this phenomenon can
be seen in Figure 2 and Figure 3 (horizontal lines
that are the reported performance of the teachers
and bars ’Instruction-tuning’ that are the performance of the students). Therefore, the teacherstudent alignment is not complete as there is a gap
between the performances of the teachers and the
students tuned via Instruction-tuning (only phase
presented in Section 2.1). In addition, it is possible
to differentiate between in-family and out-family
alignment. In the in-family, where students are
instructed with Demonstrations delivered by the
teachers of the same family, performances vary
from 6.3 points on average in question-answering
(QA) tasks and 8.2 points on average in math word
problems (MWP) tasks. Meanwhile, in the outfamily alignment, the performances vary by 8.5 on
the QA and 8.7 on the MWP.
Hence, to improve the alignment both in-family
and consistently out-family, we have proposed an
optimization technique based on a self-refinement
approach (introduced in Section 2.2), the results of
which we discuss in Section 4.2.
**4.2** **The Self-refine Impact**
The Self-refine process enables complete in-family
student-teacher alignment by consistently increasing performance in out-family settings and improv
ing the qualities of generated answers. The results
obtained in Figure 2 show that the students (SLMs
instructed with Self-refine Instruction-tuning) outperform the non-self-refined students and perform
comparably to their teachers. The same behaviour
can be observed from the out-family setting shown
in Figure 3. In particular, the teacher GPT-3.5
showed a more robust baseline performance (Table 5). Although Instruction-tuning alone transfers
some of the abilities to the student models, they
were significantly lower when compared to the outfamily teacher models. In contrast, the teacherstudent performances significantly converged after the self-refine phase, leading to the alignment
completion. Finally, a positive impact can also be
observed on the quality of students’ generations,
as shown in the additional experiment discussed in
Appendix H.
The performances appear completely aligned,
but the students were tested only for in-domain
tasks. The proposed approach could cause students
to over-specialize in in-domain tasks, running the
risk of losing the ability to solve out-domain tasks.
For this reason, we performed a set of assessments
evaluating students on in-domain and out-domain
tasks and discussed the results in Section 4.3.
**4.3** **In-Domain and Out-Domain**
The Self-refine Instruction-tuning approach complements student-teacher alignment and improves
students’ generalization abilities in out-domain
tasks. These results can be observed in Table 1 with
-----
|Col1|Col2|OBQA CSQA PIQA SIQA GMS8K MultiArith|
|---|---|---|
|Col1|Col2|OBQA CSQA PIQA SIQA GMS8K MultiArith|
|---|---|---|
|Baseline Baseline CoT|- -|53.6 ±.2 50.6 ±.4 61.6 ±.1 46.5 ±.3 68.2 ±.5 69.5 ±.2 49.5 ±.4 55.8 ±.3 63.8 ±.1 51.3 ±.5 71.3 ±.2 72.6 ±.4|
|OBQA|Instruction-tuning + Self-refine Cross Self-refine|65.3 ±.3 65.4 ±.2 66.3 ±.4 59.2 ±.2 61.4 ±.2 60.2 ±.3 70.8 ±.3 73.2 ±.2 75.3 ±.1 62.6 ±.3 68.7 ±.4 69.8 ±.3 - 78.4 ±.1 78.3 ±.5 64.5 ±.3 74.4 ±.4 83.2 ±.2|
|CSQA|Instruction-tuning + Self-refine Cross Self-refine|57.8 ±.1 71.4 ±.3 65.5 ±.4 61.8 ±.2 60.1 ±.5 59.3 ±.1 69.5 ±.5 79.8 ±.3 74.2 ±.1 66.3 ±.2 61.2 ±.3 60.3 ±.3 68.7 ±.4 - 78.4 ±.2 64.1 ±.3 72.1 ±.4 73.4 ±.2|
|PIQA|Instruction-tuning + Self-refine Cross Self-refine|56.9 ±.1 64.3 ±.2 80.2 ±.3 57.3 ±.3 58.3 ±.1 59.1 ±.3 68.2 ±.4 67.3 ±.5 84.6 ±.3 63.4 ±.2 67.8 ±.1 66.9 ±.3 68.2 ±.3 71.3 ±.3 - 64.2 ±.1 68.7 ±.4 67.6 ±.1|
|SIQA|Instruction-tuning + Self-refine Cross Self-refine|58.9 ±.2 62.8 ±.5 63.2 ±.1 62.8 ±.3 59.6 ±.1 60.2 ±.3 68.3 ±.3 68.5 ±.2 78.3 ±.3 66.2 ±.4 61.3 ±.5 60.9 ±.4 69.4 ±.2 68.5 ±.2 77.9 ±.3 - 65.1 ±.3 64.7 ±.2|
|GSM8K|Instruction-tuning + Self-refine Cross Self-refine|53.2 ±.4 54.9 ±.5 63.7 ±.1 52.5 ±.2 71.2 ±.3 70.3 ±.2 58.6 ±.3 61.7 ±.4 62.3 ±.2 52.4 ±.3 76.9 ±.1 74.3 ±.2 64.6 ±.5 64.3 ±.2 77.6 ±.4 60.3 ±.2 - 75.3 ±.3|
|MultiArith|Instruction-tuning + Self-refine Cross Self-refine|53.6 ±.2 55.7 ±.3 53.8 ±.3 51.5 ±.3 69.3 ±.1 75.6 ±.2 59.1 ±.2 63.2 ±.5 58.3 ±.3 58.6 ±.1 70.2 ±.4 85.8 ±.2 65.3 ±.4 61.3 ±.1 62.1 ±.2 60.7 ±.5 73.4 ±.3 -|
**Evaluated on**
Table 1: Evaluation of Llama-2-7 Instruction-tuned (Instruction-tuned) and with completely Self-refine
Instruction-tuning (+ Self-refine Instruction-tuned) on Demonstrations using different test sets. We evaluate
in-domain (QA vs QA) and out-domain (QA vs math-word problem) benchmarks. "Baselines" are referred to the
non-instructed model. Results colored in green indicate the in-domain benchmark, blue the out-domain benchmark, and orange the same benchmark on which perform the evaluation phase. Moreover, we propose Self-refine
Instruction-tuning in cross-setting scenario where we optimize the model on the training set related to the evaluated
task.
**4.4** **Low-resource Optimization**
Self-refine Instruction-tuning achieves sustainable
performances in low-resource settings. In fact, in
Figure 4, it is possible to observe that the performance achieved by the self-refined students consistently outperforms that of the non-self-refined
students (where only phase 1 described in Section
2.1 was performed) (technical details on the breakdown can be found in Appendix C.1). Although
it emerges that only the optimization process via
DPO is more performant than the instructiontuning process alone, the combination of the two
phases achieves the best results in both in-family
and out-family alignment in each proposed splitting
that are described in Appendix C.1.
**5** **Related Work**
**5.1** **Multi-step Reasoning**
Previous works focus on Chain-of-Thought (CoT)
prompting techniques, studying the impact of
prompting design and engineering, proposing specialized interventions to improve CoT generalization and fine-grained multi-step reasoning properties (Wei et al., 2022; Fu et al., 2023).
On the prompting design side, Gao et al. (2023)
Llama2-7 as students and Llama2-70 as teachers
(in Appendix Table 10 with Llama2-13 Table 11
with Mistral-7). In particular, behind the evaluations performed on in-domain and out-domain
tasks, the students Self-refine Instruction-tuned outperform the baselines and the Instruction-tuned
models. Furthermore, to observe the impact of
the optimization phase (introduced in Section 2.2)
on the downstream performance, we conducted a
further experiment by fixing the Instruction-tuning
phase and switching the Self-refine ones across
different evaluation tasks (e.g., we instructed a student on OBQA and then optimized via self-refine
approach on CSQA). As shown in lines Cross
Self-refine of Table 1, students warmed up on
tasks other than those they are optimized, outperformed the others, and obtained similar performances to those obtained from in-domain models.
This shows that optimization positively impacts the
alignment of generalization abilities in out-domain
tasks. Finally, following evaluations in out-domain
tasks and across scenarios, we evaluate the performance of the proposed approach by reducing the
number of demonstrations available for alignment
in Section 4.4.
-----
proposed using Python programs as a CoT prompt,
demonstrating more accurate reasoning steps and
significant improvements behind CoT prompting
(Wei et al., 2022). Zhou et al. (2023) introduced a
code generation approach to verify the intermediate
reasoning step (OpenAI, 2023).
In parallel, there have been improvements in the
accessibility of lower-parameter versions of Large
Language Models (LLMs), which we define as
Small Language Models (SLMs), on which previous CoT improvements cannot be fully observed
(Shridhar et al., 2023; Ho et al., 2023). Therefore, several works are emerging at this gap, aiming to transfer LLM reasoning properties to SLMs.
Pioneering proposals in this direction proposed
teacher-student alignment methods through a series
of approaches geared towards the distillation of the
knowledge generated by the teacher for the finetuning of the student (Li et al., 2023b; Magister
et al., 2023; Shridhar et al., 2023). Later, Yue et al.
(2023) proposed specialized Instruction-tuning using Alpaca-like style demonstrations (Taori et al.,
2023) specialized for mathematical tasks, while
Luo et al. (2023); Xu et al. (2023) proposed supervised fine-tuning reinforced with rewarding algorithms.
**5.2** **Reinforcement Learning (RL)**
A significant component that promotes the generative reasoning delivering CoT is provided by
refinement via RL methods. Recent work that applies Proximal Policy Optimization (PPO) (Schulman et al., 2017) for aligning human preferences
(Ouyang et al., 2022). Several methods have been
proposed to improve the efficiency of alignment
(Azar et al., 2023), including Direct Preference
Optimization (DPO) (Rafailov et al., 2023).
In this work, we adopt RL to refine performance
over conventional SFT. For mathematical problem
solving, Uesato et al. (2022) trained an outcome- or
process-based reward model to perform re-ranking
(Cobbe et al., 2021), achieving better performance
than SFT and majority voting (Wang et al., 2023b).
(Luong et al., 2024) adopted reinforcement learning as an extension of traditional supervised tuning.
We adopt DPO and automate the reward process in
a teacher-student context. We focus on the transfer
of CoT-style, step-wise reasoning and propose a refinement technique applied to models downstream
of the instruction-tuning phase.
**5.3** **Self-refined Instruction-tuning**
Complementing and enhancing foundational approaches (Magister et al., 2023; Uesato et al., 2022;
Li et al., 2023a; Ho et al., 2023), several papers
have been published simultaneously Wang et al.
(2023d); Luo et al. (2023); Wang et al. (2023a);
Paul et al. (2024); Luong et al. (2024); Ranaldi
and Freitas (2024) (Table 15 summarises the main
features). These works prove the effect of supervised fine-tuning to transfer the ability to produce
multi-step reasoned answers from larger to smaller
models, as described in Section 5.2. Our work goes
beyond the state-of-the-art by:
- proposing a method for aligning CoT abilities
by introducing Instruction-tuning via Demonstrations produced by answers generated by
different LLMs, decentralizing the unique
teacher model (in many cases GPT-3.5,4).
- analyzing the alignment performance between
in-family and out-family models on different
tasks related to commonsense and math reasoning, identifying crucial alignment factors
that arise between teachers and students.
- investigating the impact of teacher-student
alignment by adapting and promoting DPO
(Rafailov et al., 2023) as a cornerstone method
for eliminating performance gaps.
**6** **Conclusion**
This paper proposes a novel approach for aligning multi-step CoT reasoning between teacher
Large Language Models (LLMs) and student
Smaller LMs (SLMs). In particular, our Selfrefine Instruction-tuning is framed as an instruction tuning via Chain-of-Thought Demonstrations
method based on explanations delivered by LLMs
prompted by the CoT mechanism, which is then
reinforced via the Self-refine phase that uses Direct
Preference Optimization. We also contrast the impact of in-family and out-family alignment across
teacher and student models. The results highlight
the impact of teacher-student Instruction-tuning
interventions as a mechanism to improve the multiwise reasoning properties of smaller language models and promote the self-refinement abilities of instructed models to complete the alignment.
-----
**Limitations**
In this paper, we analyzed the impact of Answers
delivered by Large Language Models using them as
Demonstrations to reinforce the abilities of Small
Language Models. Although we proposed an extensive study, there are several limitations:
- only English-language prompting methods
and tasks are considered. The understanding
of these methods across different languages
still needs to be established.
- dependence on Large Language Models,
where the supporting training sets are not always fully known. Although the characteristics of the corpora are reported in the system
reports. Consequently, contextualising the differences in pre-training data between models
is not fully possible, where the analysis is constrained to observing the outputs in natural
language.
In conclusion, learning from and with Demonstrations carries some specific risks associated with
automation. Although a model may generalize its
predictions using a seemingly consistent series of
natural language steps, even if the prediction is
ultimately correct, there is no guarantee that the
predicted output comes from a process represented
by the generalization. A end-user might be overconfident in the model based on the CoT mechanism.
**Ethical Statement**
Although this research enhances the reasoning abilities of Smaller Language Models, they still need
to be made sufficiently robust to be applied within
more critical domains. Further safety and out-ofdistribution generalisation mechanisms needs to be
developed in tandem with the application of the
methods described in this paper, in order to establish the robustness of the described mechanisms.
**References**
Mohammad Gheshlaghi Azar, Mark Rowland, Bilal
Piot, Daniel Guo, Daniele Calandriello, Michal
[Valko, and Rémi Munos. 2023. A general theoret-](http://arxiv.org/abs/2310.12036)
[ical paradigm to understand learning from human](http://arxiv.org/abs/2310.12036)
[preferences.](http://arxiv.org/abs/2310.12036)
Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng
[Gao, and Yejin Choi. 2019. Piqa: Reasoning about](http://arxiv.org/abs/1911.11641)
[physical commonsense in natural language.](http://arxiv.org/abs/1911.11641)
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language models are few-shot learners.](http://arxiv.org/abs/2005.14165)
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg,
Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro,
[and Yi Zhang. 2023. Sparks of artificial general in-](http://arxiv.org/abs/2303.12712)
[telligence: Early experiments with gpt-4.](http://arxiv.org/abs/2303.12712)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](https://api.semanticscholar.org/CorpusID:239998651)
[lems. ArXiv, abs/2110.14168.](https://api.semanticscholar.org/CorpusID:239998651)
Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and
[Luke Zettlemoyer. 2023. Qlora: Efficient finetuning](http://arxiv.org/abs/2305.14314)
[of quantized llms.](http://arxiv.org/abs/2305.14314)
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023. Complexity-based prompting for](http://arxiv.org/abs/2210.00720)
[multi-step reasoning.](http://arxiv.org/abs/2210.00720)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2023. Pal: Program-aided language](http://arxiv.org/abs/2211.10435)
[models.](http://arxiv.org/abs/2211.10435)
[Vedant Gaur and Nikunj Saunshi. 2023. Reasoning](https://doi.org/10.18653/v1/2023.findings-acl.364)
[in large language models through symbolic math](https://doi.org/10.18653/v1/2023.findings-acl.364)
[word problems. In Findings of the Association for](https://doi.org/10.18653/v1/2023.findings-acl.364)
Computational Linguistics: ACL 2023, pages 5889–
5903, Toronto, Canada. Association for Computational Linguistics.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou,
Mantas Mazeika, Dawn Song, and Jacob Steinhardt.
[2021a. Measuring massive multitask language under-](http://arxiv.org/abs/2009.03300)
[standing.](http://arxiv.org/abs/2009.03300)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021b. Measuring mathematical](http://arxiv.org/abs/2103.03874)
[problem solving with the math dataset.](http://arxiv.org/abs/2103.03874)
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers. In](https://doi.org/10.18653/v1/2023.acl-long.830)
Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics (Volume
1: Long Papers), pages 14852–14882, Toronto,
Canada. Association for Computational Linguistics.
Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
-----
de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lélio Renard Lavaud,
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao,
Thibaut Lavril, Thomas Wang, Timothée Lacroix,
[and William El Sayed. 2023. Mistral 7b.](http://arxiv.org/abs/2310.06825)
Albert Q. Jiang, Alexandre Sablayrolles, Antoine
Roux, Arthur Mensch, Blanche Savary, Chris
Bamford, Devendra Singh Chaplot, Diego de las
Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, MarieAnne Lachaux, Pierre Stock, Sandeep Subramanian,
Sophia Yang, Szymon Antoniak, Teven Le Scao,
Théophile Gervet, Thibaut Lavril, Thomas Wang,
[Timothée Lacroix, and William El Sayed. 2024. Mix-](http://arxiv.org/abs/2401.04088)
[tral of experts.](http://arxiv.org/abs/2401.04088)
Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, and Yejin Choi. 2023a.
[Symbolic chain-of-thought distillation: Small mod-](https://doi.org/10.18653/v1/2023.acl-long.150)
[els can also “think” step-by-step. In Proceedings](https://doi.org/10.18653/v1/2023.acl-long.150)
of the 61st Annual Meeting of the Association
for Computational Linguistics (Volume 1: Long
Papers), pages 2665–2679, Toronto, Canada. Association for Computational Linguistics.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023b. Making](https://doi.org/10.18653/v1/2023.acl-long.291)
[language models better reasoners with step-aware](https://doi.org/10.18653/v1/2023.acl-long.291)
[verifier. In Proceedings of the 61st Annual Meeting](https://doi.org/10.18653/v1/2023.acl-long.291)
of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 5315–5333,
Toronto, Canada. Association for Computational Linguistics.
Hanmeng Liu, Ruoxi Ning, Zhiyang Teng, Jian Liu, Qiji
[Zhou, and Yue Zhang. 2023. Evaluating the logical](http://arxiv.org/abs/2304.03439)
[reasoning ability of chatgpt and gpt-4.](http://arxiv.org/abs/2304.03439)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](http://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](http://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](http://arxiv.org/abs/2308.09583)
Trung Quoc Luong, Xinbo Zhang, Zhanming Jie, Peng
[Sun, Xiaoran Jin, and Hang Li. 2024. Reft: Reason-](http://arxiv.org/abs/2401.08967)
[ing with reinforced fine-tuning.](http://arxiv.org/abs/2401.08967)
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason.](https://doi.org/10.18653/v1/2023.acl-short.151) In
Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics (Volume
2: Short Papers), pages 1773–1781, Toronto, Canada.
Association for Computational Linguistics.
Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish
[Sabharwal. 2018. Can a suit of armor conduct elec-](http://arxiv.org/abs/1809.02789)
[tricity? a new dataset for open book question answer-](http://arxiv.org/abs/1809.02789)
[ing.](http://arxiv.org/abs/1809.02789)
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul Christiano, Jan Leike, and Ryan Lowe. 2022.
[Training language models to follow instructions with](http://arxiv.org/abs/2203.02155)
[human feedback.](http://arxiv.org/abs/2203.02155)
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi
[Faltings. 2024. Refiner: Reasoning feedback on in-](http://arxiv.org/abs/2304.01904)
[termediate representations.](http://arxiv.org/abs/2304.01904)
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano
Ermon, Christopher D. Manning, and Chelsea Finn.
[2023. Direct preference optimization: Your language](http://arxiv.org/abs/2305.18290)
[model is secretly a reward model.](http://arxiv.org/abs/2305.18290)
Leonardo Ranaldi and Andre Freitas. 2024. [Align-](https://aclanthology.org/2024.eacl-long.109)
[ing large and small language models via chain-](https://aclanthology.org/2024.eacl-long.109)
[of-thought reasoning.](https://aclanthology.org/2024.eacl-long.109) In Proceedings of the
18th Conference of the European Chapter of the
Association for Computational Linguistics (Volume
1: Long Papers), pages 1812–1827, St. Julian’s,
Malta. Association for Computational Linguistics.
Subhro Roy and Dan Roth. 2015. [Solving general](https://doi.org/10.18653/v1/D15-1202)
[arithmetic word problems. In Proceedings of the](https://doi.org/10.18653/v1/D15-1202)
2015 Conference on Empirical Methods in Natural
Language Processing, pages 1743–1752, Lisbon,
Portugal. Association for Computational Linguistics.
Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. [Social](https://doi.org/10.18653/v1/D19-1454)
[IQa: Commonsense reasoning about social interac-](https://doi.org/10.18653/v1/D19-1454)
[tions.](https://doi.org/10.18653/v1/D19-1454) In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Processing
and the 9th International Joint Conference on
Natural Language Processing (EMNLP-IJCNLP),
pages 4463–4473, Hong Kong, China. Association
for Computational Linguistics.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec
[Radford, and Oleg Klimov. 2017. Proximal policy](http://arxiv.org/abs/1707.06347)
[optimization algorithms.](http://arxiv.org/abs/1707.06347)
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
Sachan. 2023. [Distilling reasoning capabilities](https://doi.org/10.18653/v1/2023.findings-acl.441)
[into smaller language models. In Findings of the](https://doi.org/10.18653/v1/2023.findings-acl.441)
Association for Computational Linguistics: ACL
2023, pages 7059–7073, Toronto, Canada. Association for Computational Linguistics.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. CommonsenseQA: A ques-](https://doi.org/10.18653/v1/N19-1421)
[tion answering challenge targeting commonsense](https://doi.org/10.18653/v1/N19-1421)
[knowledge. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1421)
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann
Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
-----
and Tatsunori B. Hashimoto. 2023. Stanford alpaca:
An instruction-following llama model. [https://](https://github.com/tatsu-lab/stanford_alpaca)
[github.com/tatsu-lab/stanford_alpaca.](https://github.com/tatsu-lab/stanford_alpaca)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](http://arxiv.org/abs/2307.09288)
[tuned chat models.](http://arxiv.org/abs/2307.09288)
Jonathan Uesato, Nate Kushman, Ramana Kumar, Francis Song, Noah Siegel, Lisa Wang, Antonia Creswell,
[Geoffrey Irving, and Irina Higgins. 2022. Solving](http://arxiv.org/abs/2211.14275)
[math word problems with process- and outcome-](http://arxiv.org/abs/2211.14275)
[based feedback.](http://arxiv.org/abs/2211.14275)
Peiyi Wang, Lei Li, Liang Chen, Feifan Song, Binghuai
Lin, Yunbo Cao, Tianyu Liu, and Zhifang Sui. 2023a.
[Making large language models better reasoners with](http://arxiv.org/abs/2309.02144)
[alignment.](http://arxiv.org/abs/2309.02144)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc
Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023b. Self-consistency improves](http://arxiv.org/abs/2203.11171)
[chain of thought reasoning in language models.](http://arxiv.org/abs/2203.11171)
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023c. [Self-instruct: Aligning](https://doi.org/10.18653/v1/2023.acl-long.754)
[language models with self-generated instructions.](https://doi.org/10.18653/v1/2023.acl-long.754)
In Proceedings of the 61st Annual Meeting of the
Association for Computational Linguistics (Volume
1: Long Papers), pages 13484–13508, Toronto,
Canada. Association for Computational Linguistics.
Zhaoyang Wang, Shaohan Huang, Yuxuan Liu, Jiahai Wang, Minghui Song, Zihan Zhang, Haizhen
Huang, Furu Wei, Weiwei Deng, Feng Sun, and
Qi Zhang. 2023d. [Democratizing reasoning abil-](https://doi.org/10.18653/v1/2023.emnlp-main.120)
[ity: Tailored learning from large language model. In](https://doi.org/10.18653/v1/2023.emnlp-main.120)
Proceedings of the 2023 Conference on Empirical
Methods in Natural Language Processing, pages
1948–1966, Singapore. Association for Computational Linguistics.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy
[Liang, Jeff Dean, and William Fedus. 2022. Emer-](http://arxiv.org/abs/2206.07682)
[gent abilities of large language models.](http://arxiv.org/abs/2206.07682)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2023. Chain-of-thought prompting elic-](http://arxiv.org/abs/2201.11903)
[its reasoning in large language models.](http://arxiv.org/abs/2201.11903)
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng,
Pu Zhao, Jiazhan Feng, Chongyang Tao, and Daxin
[Jiang. 2023. Wizardlm: Empowering large language](http://arxiv.org/abs/2304.12244)
[models to follow complex instructions.](http://arxiv.org/abs/2304.12244)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.
[Mammoth: Building math generalist models through](http://arxiv.org/abs/2309.05653)
[hybrid instruction tuning.](http://arxiv.org/abs/2309.05653)
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D.
[Goodman. 2022. Star: Bootstrapping reasoning with](http://arxiv.org/abs/2203.14465)
[reasoning.](http://arxiv.org/abs/2203.14465)
Mengxue Zhang, Zichao Wang, Zhichao Yang, Weiqi
[Feng, and Andrew Lan. 2023. Interpretable math](http://arxiv.org/abs/2306.00784)
[word problem solution generation via step-by-step](http://arxiv.org/abs/2306.00784)
[planning.](http://arxiv.org/abs/2306.00784)
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun
Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi Song,
[Mingjie Zhan, and Hongsheng Li. 2023. Solving](http://arxiv.org/abs/2308.07921)
[challenging math word problems using gpt-4 code](http://arxiv.org/abs/2308.07921)
[interpreter with code-based self-verification.](http://arxiv.org/abs/2308.07921)
-----
**A** **Instruction-tuning**
The Instruction-tuning proposed in our contribution follows the pipeline proposed in (Ranaldi and Freitas,
2024) to achieving teacher-student alignment comprises two steps: annotation and knowledge transfer.
In the annotation phase, Large Language Models (teachers) are prompted with questions (see Table
3). The answers are collated and form the Demonstrations (see Table 6). They then move on to the
Instruction-tuning phase, conducted using what was proposed in (Taori et al., 2023). In particular, the
Demonstrations are constructed with triples formed by the instruction (a pattern to guide the generation
related to the task), the input, which is the question related to the mathematical problem or the desired
question, and the output the prompted LLM generated. Note that instruction and input can oftentimes be
concatenated, but this depends on the basic configurations of the patterns and the type of task to be solved.
The instruction-tuning process, a specialization of task-oriented fine-tuning, is similar to the latter and can
be described in Section 2.1.
**B** **Self-refine Instruction-tuning**
In order to refine Small Language Models (students) instructed via Demonstrations delivered by Large
Language Models (teachers) we propose the Self-refine phase (introduced in Section 2.2). In particular,
this is based on a variant of the DPO optimization algorithm (Rafailov et al., 2023).
Starting from the Demonstrations defined as D = (ii, qi, ai) where i ∈D (note that ai are generated
using CoT prompt as showed in Appendix D), we prompt the students using the input xi = ii + q ( and
_xˆi = xi+ "Let’s think step by step") ∀i ∈D._
Hence, for each element in Demonstrations, we collect the Answers (yi = πinst(xi)) that are the answers
generated by the student given the input xi, and the CoT-Answers (yˆCoT = πinst(ˆxi)) are the answers
that deliver CoT generated by the student elicited via CoT mechanism ˆxi.
Hence, we introduce:
- Oracle or Target ti that is the target answer given the input xi.
- Demonstration Answer ˆai and ai: that are target answer given the input xi or ˆxi.
- Answer yi = πinst(x): is the answer generated by the student given the input x (without CoT
prompt).
- CoT Answer yCoT = πinst(xCoT ): is the answer that delivers CoT generated by the student elicited
via CoT mechanism xCoT .
In the following lines, we formalize the structuring of DPOCoT, DPOanswer and other configurations.
**DPOCoT** We propose DPOCoT where the answers that deliver correct CoT are referred to as the preferred
response, while the others are the answers without CoT defined as:
DPOCoT (πθ; πinst) = E(xCoT,yw,yl) _D [log σ(M_ (xCoT, yw, yl))] (3)
_L_ _−_ _∼_
Where LDPOCoT (πtheta; πtextinst) the same LDPO introduced in Section 2.2 but in particular to elicit preferred generations the yw and yl components are defined as follows, ∀i ∈D :
_yˆi_ if ti ∈ _yˆi_ (4)
_aˆi_
_yw =_
while the discouraged answers are yl that are yi ∀i ∈D.
**DPOanswer** In contrast, we propose DPOanswer and where the answers without CoT are referred to as the
preferred.
_LDPOanswer_ (πθ; πinst) = −E(x,yp,yCoT )∼D [log σ(M (x, yp, yCoT ))] (5)
However, since our contribution is focused on CoT in the main work, we only consider DPOCoT . In the
Table 4 we have reported DPO results
-----
**C** **Experimental Details**
**C.1** **Data Splitting**
In order to observe the impact of the Demonstrations, we produced a series of experiments by systematically decreasing the Instruction-tuning data. In particular, we chose three sub-sets with 75%, 50%, and
25% from the total number of demonstrations. In detail, the Self-refine Instruction phases on the number
of equal Demonstrations are performed by taking about 3000 examples in splitting 100%, 2250 in splitting
50%, 1500 in splitting 50%, and 750 in splitting 25%. We chose the value 3000 because it has the smallest
CoT Demonstrations available. For the total Demonstrations, we selected random samples. Using these
splitting, we performed the evaluations incrementally as the demonstrations used to do Instruction-tuning,
to do Self-refine, and to do Self-refine Instruction-tuning.
**C.2** **Parameters**
The annotation phase that the Teachers performed was done on the training set. The evaluation phase of
both the basic models and the Students and the Teachers was done on the test splitting. The evaluation,
described in Section 3.3, was done with question probing and string matching of the generated answers.
More specifically:
**Teachers** We performed the annotation phase for each benchmark by delivering to GPT-3.5-turbo,
Mixtral7x8 and Llama-2-70-chat the prompts structured as shown in Table 2 and Table 3 (customized
for each benchmark). We set the temperatures to 0.7 for GPT-3.5-turbo and 0.1 for Llama-2-70-chat as
recommended in technical reports. Moreover, we kept all the other parameters as default. All parameters
are shown in our code .
**Baseline & Students** We evaluated the performance of the Small Language Models (Llama-2-7-chat,
Llama-2-13-chat, Mistral-7b) by prompting them with the same format used for the Teachers. For
both the baselines and the instructed models, we set the temperature to 0.1 and kept all the other parameters
as default. The evaluation pipelines and generation parameters are available in our code.
Figure 4: Acciracies (%) on the test set of benchmarks. The Self-refine Instruction-tuning performed on different
splits (see Appendix C.1 for major details).
-----
**Prompting Approaches**
_Prompt for task: GSM8k, MultiArith_
**Answer** **the** **following** **mathematical**
**question with numerical solution.**
**Question: <Question>**
**Answer:**
_Prompt for task: OBQA, CSQA, PIQA, SIQA_
**Choose the answer to the question only**
**from options A, B, C, [...].**
**Question: <Question>**
**Choices:**
A) <Option1>
B) <Option2>
C) <Option3>
....
**Answer:**
Table 2: Example of input-prompt for multiple-choices (left) and mathematical (right) question-answering benchmarks.
_Prompt for task: GSM8k, MultiArith_
**Answer** **the** **following** **mathematical**
**question with numerical solution.**
**Question: <Question>**
**Answer: Let’s think step by step**
_Prompt for task: OBQA, CSQA, PIQA, SIQA_
**Choose the answer to the question only**
**from options A, B, C, [...].**
**Question: <Question>**
**Choices:**
A) <Option1>
B) <Option2>
C) <Option3>
....
**Answer: Let’s think step by step**
Table 3: Example Zero-shot CoT of input-prompt for multiple-choices (left) and mathematical (right) questionanswering benchmarks (approach used in this work).
**E** **Models**
|Model|Version|
|---|---|
|Llama-2-7-chat Llama-2-13-chat Llama-2-70-chat Mistral-7 Mixtral7x8|meta-llama/Llama-2-7b meta-llama/Llama-2-13b meta-llama/Llama-2-70b mistralai/Mistral-7B-Instruct-v0.1 mistralai/Mixtral-8x7B-v0.1|
Table 4: List and specific versions of the models proposed in this work, which can be found on huggingface.co.
For each model we used all the default configurations proposed in the repositories.
-----
**F** **Accuracy of LLMs on different Benchhmark**
|Benchmarks Llama-2-70 Baseline CoT|GPT-3.5 Mixtral7x8 Baseline CoT Baseline CoT|
|---|---|
|Training|Col2|Col3|Col4|
|---|---|---|---|
|Training|Col2|Col3|Col4|
|---|---|---|---|
|OpenBook QA CommonSesnse QA|65.6 71.3 ±.3 ±.1 74.2 79.6 ±.1 ±.3|66.2 75.4 ±.2 ±.4 79.3 84.8 ±.4 ±.1|77.9 81.2 ±.3 ±.1 78.2 82.3 ±.2 ±.3|
|Social Interaction QA Physical Interaction QA|65.4 67.5 ±.2 ±.3 82.6 85.8 ±.2 ±.2±.3|67.6 70.3 ±.5 ±.4 83.5 85.3 ±.3 ±.1|65.5 68.2 ±.2 ±.3 80.2 84.1 ±.3 ±.3|
|GSM8K MultiArith|74.6 77.2 ±.1 ±.2 88.6 90.8 ±.4 ±.3|83.2 86.5 ±.2 ±.1 94.9 96.7 ±.4 ±.1|65.6 67.9 ±.4 ±.2 89.3 91.5 ±.1 ±.4|
|Testing|Col2|Col3|Col4|
|---|---|---|---|
|Testing|Col2|Col3|Col4|
|---|---|---|---|
|OpenBook QA CommonSesnse QA|65.9 70.8 ±.2 ±.1 73.4 81.8 ±.2 ±.3|67.8 74.6 ±.1 ±.4 80.2 83.7 ±.2 ±.1|78.4 84.6 ±.3 ±.2 77.6 81.5 ±.3 ±.1|
|Social Interaction QA Physical Interaction QA|64.2 66.9 ±.2 ±.4 82.6 85.6 ±.3 ±.5|66.9 71.3 ±.3 84.3 85.8 ±.2 ±.5|64.3 65.4 ±.3 ±.2 79.9 83.5 ±.3 ±.1|
|GSM8K MultiArith|75.2 77.8 ±.5 ±.5 89.2 92.3 ±.1 ±.2|82.8 84.6 ±.2 ±.4 95.6 97.4 ±.2 ±3|63.4 62.8 ±.3 ±.5 88.9 90.2 ±.1 ±.3|
Table 5: Accuracy (%) of Llama-2-70, GPT-3.5 and Mixtral7x8 (teachers) on training and testing data with CoT
prompt (CoT) and with the standard prompt (Baseline).
**Instruction**
Choose the answer to the question only from options A, B, C, D.
**Input**
Question: Which form can be made into a triangle?
Choices:
A) ice
B) steam
C) water
D) salt water
Answer: Let’s think step-by-step
**Output (that is the Answer produced by teacher model)**
Answer:
A) ice: Ice is a solid form of water. In solid form, substances can be molded or shaped into
various forms, including triangles. If you had a block of ice, you could carve or mold it
into a triangular shape.
B) steam: Steam is the gaseous form of water. Being a gas, it doesn’t have a fixed shape. You
can’t shape steam into a triangle without containing it within some form of solid boundary,
and even then, the steam itself isn’t triangular - the container is.
C) water: Water is a liquid that takes the shape of its container. You can pour water into
a triangular container, and while the volume inside would be triangular, the water itself
isn’t inherently triangular.
D) salt water: This is essentially the same as option C, but with salt dissolved in it. Being
a liquid, it takes the shape of its container.
Given these considerations, the only substance among the options that can be shaped into a
triangle by itself is A) Ice.
Table 6: Example of Demonstration produced by the GPT-3.5 answer on an Open Book Question Answering
benchmark instance. The structure is composed by: Instruction, Input and Output.
-----
**G** **Description of proposed Benchmark**
|Dataset|Example|
|---|---|
|Open Book Question Answering (OBQA) (Mihaylov et al., 2018)|When birds migrate south for the winter, they do it because A) they are genetically called to. B) their children ask them to. C) it is important to their happiness. D) they decide to each.|
|Common Sense Question Answering (CSQA) (Talmor et al., 2019)|Aside from water and nourishment what does your dog need? A) bone. B) charm. C) petted. D) lots of attention. E) walked.|
|Physical Interaction Question Answering (PIQA) (Bisk et al., 2019)|How do you attach toilet paper to a glass jar? A) Press a piece of double-sided tape to the glass jar and then press the toilet paper onto the tape. B) Spread mayonnaise all over the jar with your palms and then roll the jar in toilet paper.|
|Social Interaction Question Answering (SIQA) (Sap et al., 2019)|Taylor gave help to a friend who was having trouble keeping up with their bills. What will their friend want to do next? A) Help the friend find a higher paying job. B) Thank Taylor for the generosity. C) pay some of their late employees.|
|(GSM8K) (Cobbe et al., 2021)|Tina makes $18.00 an hour. If she works more than 8 hours per shift, she is eligible for overtime, which is paid by your wage + 1/2 your hourly hourly wage. If she works 10 hours every day for 5 days, how much money does she make?|
|(MultiArith) (Roy and Roth, 2015)|Chloe was playing a video game where she scores 9 points for each treasure she finds. If she found 6 treasures on the first level and 3 on the second, what would her score be?|
Table 7: Examples of the benchmarks used in this paper.
|OBQA CSQA PIQA SIQA|GSM8K MultiArith|
|---|---|
|classes 4 5 2 3|- -|
|---|---|
|Training # examples for 1000 800 2000 1330 each class|4000 420|
|---|---|
|Test # examples for 125∗ 235∗ 924∗ 640∗ each class (± 8) (± 11) (± 18) (± 19)|1318 180|
|---|---|
|||
Table 8: Characteristics Training and Test set of benchmarks proposed in Section 3.1. The * indicates that the
number of examples are not perfect balanced, but the difference from the average is marginal. GMS8K e MultiArith
are not closed-ended question answering; they only have a question and a numerical solution.
|Name|Repository|
|---|---|
|CommonSenseQA (Talmor et al., 2019) OpenBookQA (Mihaylov et al., 2018) StrategyQA ()|huggingface.co/datasets/commonsense_qa huggingface.co/datasets/openbookqa huggingface.co/datasets/voidful/StrategyQA|
|PIQA (Bisk et al., 2019) SIQA (Sap et al., 2019)|huggingface.co/datasets/piqa huggingface.co/datasets/social_i_qa|
|GSM8K (Cobbe et al., 2021) MultiArith (Roy and Roth, 2015)|huggingface.co/datasets/gsm8k huggingface.co/datasets/ChilleD/MultiArith|
Table 9: In this table, we list the versions of the benchmark proposed in this work, which can be found on
huggingface.co.
-----
|Col1|Col2|OBQA CSQA PIQA SIQA GMS8K MultiArith|
|---|---|---|
|Col1|Col2|OBQA CSQA PIQA SIQA GMS8K MultiArith|
|---|---|---|
|Baseline Baseline CoT|- -|55.4 ±.2 63.4 ±.3 66.4 ±.2 48.3 ±.2 65.6 ±.4 63.4 ±.2 54.2 ±.2 62.8 ±.4 71.2 ±.3 46.9 ±.5 70.5 ±.1 62.8 ±.2|
|OBQA|Instruction-tuning + Self-refine Cross Self-refine|68.5 ±.4 67.5 ±.3 69.4 ±.1 60.1 ±.2 62.3 ±.4 61.5 ±.5 71.2 ±.4 74.1 ±.2 76.2 ±.3 63.4 ±.3 69.9 ±.4 70.7 ±.2 - 79.2 ±.1 79.5 ±.2 65.6 ±.3 75.2 ±.4 84.3 ±.5|
|CSQA|Instruction-tuning + Self-refine Cross Self-refine|58.4 ±.4 77.5 ±.2 66.4 ±.2 61.8 ±.3 62.4 ±.4 60.2 ±.2 69.5 ±.5 81.4 ±.2 74.2 ±.5 67.9 ±.1 62.1 ±.3 61.4 ±.4 70.2 ±.4 - 79.5 ±.3 65.2 ±.1 73.3 ±.3 75.3 ±.5|
|PIQA|Instruction-tuning + Self-refine Cross Self-refine|57.8 ±.2 65.2 ±.3 81.9 ±.4 58.5 ±.4 59.2 ±.4 60.3 ±.3 69.6 ±.2 68.2 ±.4 85.1 ±.5 64.3 ±.1 69.3 ±.2 68.1 ±.3 69.9 ±.1 71.3 ±.1 - 65.3 ±.1 69.6 ±.4 69.2 ±.2|
|SIQA|Instruction-tuning + Self-refine Cross Self-refine|59.6 ±.1 63.9 ±.4 67.1 ±.2 64.5 ±.3 60.3 ±.4 61.3 ±.2 69.2 ±.2 69.4 ±.1 79.2 ±.4 66.7 ±.3 62.4 ±.4 61.8 ±.2 71.2 ±.2 69.2 ±.1 80.4 ±.2 - 66.5 ±.1 66.7 ±.2|
|GSM8K|Instruction-tuning + Self-refine Cross Self-refine|54.3 ±.2 55.8 ±.3 64.3 ±.4 53.2 ±.3 72.3 ±.3 71.6 ±.2 59.3 ±.4 62.2 ±.2 63.5 ±.3 53.5 ±.5 77.2 ±.4 75.2 ±.3 65.7 ±.1 65.2 ±.5 78.1 ±.3 61.6 ±.4 - 76.2 ±.2|
|MultiArith|Instruction-tuning + Self-refine Cross Self-refine|54.7 ±.2 56.6 ±.3 54.5 ±.3 52.4 ±.3 70.2 ±.1 75.8 ±.2 60.3 ±.2 64.1 ±.4 59.4 ±.3 59.7 ±.1 72.1 ±.4 86.2 ±.3 66.2 ±.3 62.4 ±.1 63.2 ±.3 61.5 ±.4 73.9 ±.2 -|
**Evaluated on**
Table 10: Evaluation of Llama-2-13 Instruction-tuned (Instruction-tuned) and with completely Self-refine
Instruction-tuning (+ Self-refine Instruction-tuned) on Demonstrations using different test sets. We evaluate
in-domain (QA vs QA) and out-domain (QA vs math-word problem) benchmarks. "Baselines" are referred to the
non-instructed model. Results colored in green indicate the in-domain benchmark, blue the out-domain benchmark,
and orange the same benchmark on which the evaluation phase is performed. Moreover, we propose Self-refine
Instruction-tuning in cross-setting scenarios where we optimize the model on the training set related to the evaluated
task.
|Col1|Col2|OBQA CSQA PIQA SIQA GMS8K MultiArith|
|---|---|---|
|Col1|Col2|OBQA CSQA PIQA SIQA GMS8K MultiArith|
|---|---|---|
|Baseline Baseline CoT|- -|62.7 ±.3 69.2 ±.4 67.3 ±.1 55.3 ±.2 54.2 ±.2 88.4 ±.1 60.4 ±.3 68.7 ±.2 66.1 ±.2 54.8 ±.4 55.6 ±.3 87.3 ±.2|
|OBQA|Instruction-tuning + Self-refine Cross Self-refine|78.3 ±.2 65.4 ±.2 67.2 ±.3 59.2 ±.1 64.2 ±.2 62.1 ±.3 87.6 ±.2 73.1 ±.2 76.1 ±.1 63.3 ±.3 69.1 ±.4 70.7 ±.3 - 79.4 ±.1 80.1 ±.2 68.2 ±.4 75.2 ±.4 81.3 ±.1|
|CSQA|Instruction-tuning + Self-refine Cross Self-refine|58.9 ±.1 73.1 ±.4 65.8 ±.2 62.1 ±.1 62.2 ±.3 60.2 ±.2 69.5 ±.5 81.3 ±.1 75.1 ±.1 66.5 ±.2 61.1 ±.4 62.4 ±.1 69.2 ±.2 - 79.3 ±.1 65.2 ±.4 72.8 ±.4 74.4 ±.2|
|PIQA|Instruction-tuning + Self-refine Cross Self-refine|58.6 ±.2 64.8 ±.2 81.6 ±.2 59.2 ±.4 60.2 ±.2 60.3 ±.4 68.2 ±.4 68.2 ±.5 85.6 ±.2 63.8 ±.2 67.9 ±.2 67.2 ±.4 69.2 ±.3 71.9 ±.3 - 63.2 ±.1 68.4 ±.5 69.6 ±.1|
|SIQA|Instruction-tuning + Self-refine Cross Self-refine|59.3 ±.2 66.8 ±.2 63.2 ±.4 61.5 ±.2 60.2 ±.1 61.3 ±.3 68.3 ±.3 68.5 ±.2 78.3 ±.3 65.8 ±.4 62.4 ±.5 61.3 ±.4 71.3 ±.4 69.2 ±.2 78.1 ±.2 - 65.6 ±.3 68.3 ±.1|
|GSM8K|Instruction-tuning + Self-refine Cross Self-refine|52.4 ±.1 54.9 ±.5 58.7 ±.1 51.8 ±.3 56.1 ±.1 65.2 ±. 57.6 ±.3 58.7 ±.4 59.3 ±.2 51.4 ±.2 63.4 ±.1 60.3 ±.1 61.3 ±.5 64.3 ±.2 70.1 ±.4 58.2 ±.1 - 70.5 ±.3|
|MultiArith|Instruction-tuning + Self-refine Cross Self-refine|57.9 ±.2 59.2 ±.3 53.8 ±.4 51.5 ±.3 69.3 ±.2 89.6 ±.4 59.1 ±.2 63.2 ±.4 59.4 ±.5 59.9 ±.2 68.2 ±.1 91.4 ±.3 64.7 ±.4 65.8 ±.2 64.1 ±.4 61.5 ±.4 70.1 ±.3 -|
**Evaluated on**
Table 11: Evaluation of Mistral-7 Instruction-tuned (Instruction-tuned) and with completely Self-refine
Instruction-tuning (+ Self-refine Instruction-tuned) on Demonstrations using different test sets. We evaluate
in-domain (QA vs QA) and out-domain (QA vs math-word problem) benchmarks. "Baselines" are referred to the
non-instructed model. Results colored in green indicate the in-domain benchmark, blue the out-domain benchmark, and orange the same benchmark on which perform the evaluation phase. Moreover, we propose Self-refine
Instruction-tuning in cross-setting scenario where we optimize the model on the training set related to the evaluated
task.
-----
**H** **Quality of Generations**
To demonstrate the quality of the demonstrations generated by the teachers and students, we propose
annotating the responses provided by the teacher and student models automatically. In particular, we
sampled 300 questions (50 questions for each task from the testing set split). Hence, we systematically
prompt both the teacher LLMs and students. Finally, we estimated the quality of the responses generated
by systematically prompting a judge LLM (we chose GPT-4 as it is not among the models used in this
work).
Please act as an impartial judge and evaluate the quality of the response
provided by an AI assistant to the user instruction displayed below. Your
evaluation should consider factors such as quality, accuracy, depth, and
level of detail. Begin your assessment with a short explanation. Be as
objective as possible. After providing your explanation, please rate the
response on a scale of 1 to 3 strictly following this format:“[[rating]]”,
for example: “Rating: [[2]]”.
[question]
${question}
[AI assistant’s response]
${response}
Table 12: Using this prompt, we systematically query GPT-4 to note the answers’ quality.
|Model|Llama2-70b|Mixtral8x7b|GPT-3.5|
|---|---|---|---|
|Baseline Baseline CoT Target Answers|1.63 2.72 1|1.34 2.56 1|1.68 2.89 1|
Table 13: Averages quality scores obtained by LLMs’ answers by using GPT-4 as judge (see Table H).
|Col1|Model|Llama2-7b|Llama2-13b|Mistral-7b|
|---|---|---|---|---|
||Baseline Baseline CoT|1.26 1.47|1.39 1.56|1.16 1.21|
|in-family|Instruction-tuning Self-refine Instruction-tuning|2.43 2.75|2.66 2.83|2.36 2.54|
|out-family (GPT-3.5)|Instruction-tuning Self-refine Instruction-tuning|1.99 2.86|2.17 2.79|1.76 2.82|
Table 14: Averages quality scores obtained by students’ answers by using GPT-4 as judge (see Table H).
-----
|work|approach|teacher/s|students/s|tasks|
|---|---|---|---|---|
|(Zelikman et al., 2022)|Self-SFT|-|GPT-J, LaMDA|GSM8k, CSQA|
|(Magister et al., 2023)|SFT|PaLM GPT-3.5|T5-small, -medium T5-large, -xxl|GSM8k, StrategyQA, MArith|
|(Li et al., 2023a)|SFT|GPT-3 175B|OPT-1.3b|CSQA, OBQA, QARel|
|(Shridhar et al., 2023)|SFT|GPT-3 175B|GPT-2|GSM8k, StrategyQA SVAMP|
|(Ho et al., 2023)|SFT|InstructGPT (text-davinci-002)|GPT-3 (ada,babbage,curie)|GSM8k, StrategyQA, MArith, SVAMP, AddSub|
|(Wang et al., 2023d)|IT+RL|GPT-3|GPT-J|GSM8K, MultiArith, SVAMP CSQA, StrategyQA|
|(Luong et al., 2024)|SFT+RL|GPT-3.5|Galactica, CodeLlama|GSM8k SVAMP MathQA|
|(Ranaldi and Freitas, 2024)|IT|GPT-3.5, Llama2-70|Llama2-7,13, Mistral-7|GSM8k, PIQA, MathQA CSQA, OBQA, SIQA|
|(Wang et al., 2023a)|SFT+RL|GPT-3.5|Llama2-7,13|GSM8k, EAQA|
|(Paul et al., 2024)|SFT|GPT-3.5|CodeT5 s,m|GSM8k, SVAMP, MArith|
|Ours|IT+RL (DPO) (in-family vs out-family)|GPT-3.5, Llama2-70 Mixtral8x7|Llama2-7,Llama2-13, Mistral-7|GSM8k, CSQA, OBQA PIQA, SIQA, MArith MATH, MMLU|
Table 15: Summary of methods, teacher and student models of previous work, we indicate Supervised Fine-tuning
as (SFT), Instruction-tuning as (IT), and Reinforcement Learning (RL). *note that previous works do not use DPO
(Rafailov et al., 2023)
**I** **Additional Evaluations**
Figure 5: Accuracies (%) additional benchmarks as described in Section 3.1. Applying the same pipeline proposed
in Section 2 and the same experimental set-up (Section 3) as the experiments shown in Figure 2 and Figure 3. In
this experiment, we showed that the approach proposed in Section 2 is also scalable on multi-task benchmarks such
as MATH (Hendrycks et al., 2021b) and MMLU (Hendrycks et al., 2021a). (Self-refine Instruction-tuning phase
performed using 25% as the training set and omitted in the evaluation phase) (as described in the legend, we use the
notation method(Teacher->Student)).
-----
| [
"Leonardo, Ranaldi",
"Andrè, Freitas"
] | 2024-05-01T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2405.00402 | https://arxiv.org/abs/2405.00402 | https://www.semanticscholar.org/paper/5c118e57b5398224ca4401c902cda33da7d29ec3 |
Solving Math Word Problems with Process-based and Outcome-based Feedback | N/A | This work runs the first comprehensive comparison between process-and outcome-based approaches trained on a natural language task, GSM8K, and finds that pure outcome-based supervision produces similar final-answer error rates with less label supervision. | null | [
"Jonathan, Uesato",
"Nate, Kushman",
"Ramana, Kumar",
"Francis, Song",
"Noah, Siegel",
"Lisa, Wang",
"Antonia, Creswell",
"Geoffery, Irving",
"Irina, Higgins"
] | 2022-01-01T00:00:00 | NeurIPS 2022 MATH-AI Workshop | false | 1 | 0 | null | https://www.semanticscholar.org/paper/e002bb8dae5a18a5ea1e7e1aafa16e19ad545662 | null | https://www.semanticscholar.org/paper/e002bb8dae5a18a5ea1e7e1aafa16e19ad545662 |
Stepwise Verification and Remediation of Student Reasoning Errors with Large Language Model Tutors | Large language models (LLMs) present an opportunity to scale high-quality personalized education to all. A promising approach towards this means is to build dialog tutoring models that scaffold students' problem-solving. However, even though existing LLMs perform well in solving reasoning questions, they struggle to precisely detect student's errors and tailor their feedback to these errors. Inspired by real-world teaching practice where teachers identify student errors and customize their response based on them, we focus on verifying student solutions and show how grounding to such verification improves the overall quality of tutor response generation. We collect a dataset of 1K stepwise math reasoning chains with the first error step annotated by teachers. We show empirically that finding the mistake in a student solution is challenging for current models. We propose and evaluate several verifiers for detecting these errors. Using both automatic and human evaluation we show that the student solution verifiers steer the generation model towards highly targeted responses to student errors which are more often correct with less hallucinations compared to existing baselines. | This work focuses on verifying student solutions and shows how grounding to such verification improves the overall quality of tutor response generation and proposes and evaluates several verifiers for detecting student's errors. | [
"Nico, Daheim",
"Jakub, Macina",
"Iryna, Gurevych",
"Manu, Kapur",
"Mrinmaya, Sachan"
] | 2024-07-12T00:00:00 | null | false | 1 | 0 | null | https://arxiv.org/abs/2407.09136v1 | https://arxiv.org/abs/2407.09136 | https://www.semanticscholar.org/paper/696bc486d84cb33a152491676b6237e0e5e43061 |
|
Strategic Chain-of-Thought: Guiding Accurate Reasoning in LLMs through Strategy Elicitation | The Chain-of-Thought (CoT) paradigm has emerged as a critical approach for enhancing the reasoning capabilities of large language models (LLMs). However, despite their widespread adoption and success, CoT methods often exhibit instability due to their inability to consistently ensure the quality of generated reasoning paths, leading to sub-optimal reasoning performance. To address this challenge, we propose the \textbf{Strategic Chain-of-Thought} (SCoT), a novel methodology designed to refine LLM performance by integrating strategic knowledge prior to generating intermediate reasoning steps. SCoT employs a two-stage approach within a single prompt: first eliciting an effective problem-solving strategy, which is then used to guide the generation of high-quality CoT paths and final answers. Our experiments across eight challenging reasoning datasets demonstrate significant improvements, including a 21.05\% increase on the GSM8K dataset and 24.13\% on the Tracking\_Objects dataset, respectively, using the Llama3-8b model. Additionally, we extend the SCoT framework to develop a few-shot method with automatically matched demonstrations, yielding even stronger results. These findings underscore the efficacy of SCoT, highlighting its potential to substantially enhance LLM performance in complex reasoning tasks. | The SCoT framework is extended to develop a few-shot method with automatically matched demonstrations, yielding even stronger results, underscore the efficacy of SCoT, highlighting its potential to substantially enhance LLM performance in complex reasoning tasks. | ## Strategic Chain-of-Thought: Guiding Accurate Reasoning in LLMs through Strategy Elicitation
**Yu Wang[1,2], Shiwan Zhao[3,4], Zhihu Wang[1], Heyuan Huang[1], Ming Fan[2], Yubo Zhang[1],**
**Zhixing Wang[1], Haijun Wang[2], Ting Liu[2]**
1Huawei Technologies Ltd.
2Xi’an Jiaotong University
3Nankai University
4Shanghai Jiao Tong University
**Abstract**
**Vong-based CoT Enhancement**
**e.g. Self-Consistency**
The Chain-of-Thought (CoT) paradigm has emerged as a
critical approach for enhancing the reasoning capabilities
of large language models (LLMs). However, despite their
widespread adoption and success, CoT methods often exhibit instability due to their inability to consistently ensure
the quality of generated reasoning paths, leading to suboptimal reasoning performance. To address this challenge, we
propose the Strategic Chain-of-Thought (SCoT), a novel
methodology designed to refine LLM performance by integrating strategic knowledge prior to generating intermediate
reasoning steps. SCoT employs a two-stage approach within
a single prompt: first eliciting an effective problem-solving
strategy, which is then used to guide the generation of highquality CoT paths and final answers. Our experiments across
eight challenging reasoning datasets demonstrate significant
improvements, including a 21.05% increase on the GSM8K
dataset and 24.13% on the Tracking Objects dataset, respectively, using the Llama3-8b model. Additionally, we extend
the SCoT framework to develop a few-shot method with automatically matched demonstrations, yielding even stronger
results. These findings underscore the efficacy of SCoT, highlighting its potential to substantially enhance LLM performance in complex reasoning tasks.
**Introduction**
The rapid development of large language models (LLMs)
has highlighted their remarkable effectiveness in reasoning tasks (Huang and Chang 2022; Chang et al. 2024),
particularly when integrated with various prompting techniques (Sivarajkumar et al. 2023). These techniques consistently enable impressive performance across diverse domains. Among them, the Chain-of-Thought (CoT) paradigm
has played a pivotal role in enhancing the reasoning capabilities of LLMs (Kojima et al. 2022; Zhang et al. 2022;
Wang et al. 2023). As a result, CoT has become a fundamental component of contemporary LLMs and is now widely
adopted in the field of natural language processing.
Despite the demonstrated effectiveness of the CoT approach in various applications, it faces significant challenges
in complex reasoning tasks. These challenges primarily arise
from the variability in the quality of the reasoning paths generated by the CoT method (Wang et al. 2022), which are not
consistently optimal. Consequently, even when LLMs produce a CoT path that aligns with a valid reasoning process,
|Col1|Answer|
|---|---|
**Low efficiency**
|Input Query|Mul-Query|
|---|---|
**CoT 1** **Vong**
**Language**
**CoT 2**
**Model** **…**
**CoT N**
**RAG–based CoT Enhancement**
**e.g. Step Back & Buffer of Thoughts**
|Col1|Answer|
|---|---|
**External sources**
**required**
**External** **Knowledge** **Language**
**Input Query**
**Source** **Model**
**Strategic Chain-of-Thought**
|Input Query|Col2|
|---|---|
|||
|Single-Query|Col2|
|---|---|
|Single-Query|Answer|
|Step1: Strategy Elicitaon||
**Step2: Answer Generaon**
Figure 1: Comparison of some popular methods with SCoT:
As a single-query method, SCoT is efficient and does not
rely on external knowledge sources, distinguishing it from
other approaches.
there remains a risk that the final outcome may be erroneous.
This phenomenon is analogous to findings in cognitive science, where different problem-solving strategies, although correct, can vary in their likelihood of producing errors. According to Sweller’s Cognitive Load Theory (Sweller 1988), different problem-solving strategies impose varying levels of cognitive load, leading to different
probabilities of error.
This variability in error probability, influenced by the undetermined strategies used to generate CoT paths, can undermine the reliability of the CoT approach in critical applications where precise and reliable reasoning is essential.
Therefore, further refinement and improvement of the CoT
methodology are necessary to enhance its performance in
complex reasoning scenarios, drawing on insights from both
artificial intelligence and cognitive science.
Various methods have been developed to address this
challenge by enhancing the quality of CoT paths in LLMs, as
illustrated in Figure 1. Among these methods, voting-based
approaches enhance reasoning accuracy by generating diverse reasoning paths and then voting on the most reliable
-----
and correct answer (Wang et al. 2022; Zhang et al. 2023).
Retrieval-Augmented Generation (RAG)-based approaches
introduce external sources to access additional knowledge
through multi-step prompting strategies (Lewis et al. 2021;
Yang et al. 2024b; Zheng et al. 2023). These approaches improve the reasoning process by systematically incorporating
and aligning external information before arriving at the final result. Additionally, Suzgun and Kalai (2024) have integrated various prompt enhancement algorithms, dynamically selecting the optimal one to produce the most accurate
results during actual operation.
These approaches do help mitigate the variability in
path quality; however, they often come with significant resource demands. For instance, methods like SelfConsistency (Wang et al. 2022) may require up to 40 queries,
while techniques such as BoT (Yang et al. 2024b) involve
multi-stage queries. Additionally, some approaches may necessitate the integration of external knowledge sources to
achieve optimal performance, which places high demands
on expert resources.
To tackle this challenge, we propose a novel approach
called Strategic Chain-of-Thought (SCoT). SCoT is designed to improve the quality of CoT path generation for
reasoning tasks by incorporating strategic knowledge. The
method involves a two-step process within a single prompt.
First, it explores and identifies various problem-solving
strategies, eliciting the most effective one as the guiding
strategic knowledge. Subsequently, this strategic knowledge
directs the model in generating high-quality CoT paths and
producing accurate final answers, ensuring a more effective reasoning process. We further extend the SCoT framework by adapting it to a few-shot method. In this approach,
strategic knowledge is used to automatically select the most
relevant demonstrations. These examples can be employed
within both the few-shot and SCoT frameworks to further
enhance reasoning capability. SCoT enhances the model’s
reasoning capabilities without the need for multi-query approaches or additional knowledge sources. By eliminating
the requirement for multiple queries and external knowledge integration, SCoT reduces computational overhead and
operational costs, making it a more practical and resourceefficient solution.
The concept of strategic knowledge in our approach is
also inspired by the recent Re-TASK framework (Wang
et al. 2024), which revisits LLM tasks from the perspectives of capability, skill, and knowledge. While Re-TASK
enhances LLM capabilities through knowledge injection and
skill adaptation via capability items, SCoT takes a different
approach by eliciting knowledge rather than relying on explicit knowledge injection. Furthermore, the demonstrations
based on strategic knowledge in SCoT are analogous to the
capability items in Re-TASK.
We conducted experiments across eight reasoning
datasets spanning five distinct domains: mathematical reasoning, commonsense reasoning, physical reasoning, spatial reasoning, and multi-hop reasoning. The results revealed
substantial improvements across various models, including
a 21.05% increase in accuracy on the GSM8K dataset and
a 24.13% increase on the Tracking Objects dataset with the
Llama3-8b model. These results validate the effectiveness of
the SCoT approach.
The contributions of this work are summarized as follows:
- We introduce a two-stage methodology that integrates
strategic knowledge, guiding the LLM to generate highquality CoT paths by first developing a problem-solving
strategy and then producing the final answer.
- We propose a method that leverages strategic knowledge
to select and match relevant demonstrations, enabling the
precise pairing of high-quality CoT examples.
- Our experimental results validate the effectiveness of
SCoT, demonstrating promising outcomes in reasoning
tasks across multiple domains.
**Related Work**
**Strategic Diversity in Problem Solving**
In the realm of problem-solving, there is rarely a one-sizefits-all approach. The complexity of each problem often necessitate a variety of strategies to reach an effective solution.
In the fields of education and cognitive science, the phenomenon of using multiple approaches to solve problems
is quite common (Sweller 1988; Rusczyk 2003). Similarly,
researchers have found that LLMs might generate diverse
solution paths for one question, where the problem-solving
strategies and answers of these methods might vary significantly (Wang and Zhou 2024; Wang et al. 2022).
**Enhancement of CoT Path**
Current methods for enhancing the quality of modelgenerated content are diverse and sophisticated.
Some approaches utilize a voting-based mechanism. For
example, Wang et al. (2022) introduced the Self-Consistency
method, which improves reasoning accuracy by first generating more than 20 CoT paths and then voting for the
most consistent answer. Other methods incorporate external sources. Zheng et al. (2023) introduced Step Back,
which prompts models to generate an abstract of the question to capture deeper logical structures, thereby enhancing retrieval-augmented generation (RAG) capabilities. Similarly, Yang et al. (2024b) developed another RAG-based
method, Buffer of Thoughts, which uses knowledge extracted from external sources and predefined knowledge categories for each task. These elements are integrated into
a predefined task prompt template, enabling the model to
generate more accurate answers. Additionally, some methods introduce external tools to aid problem-solving. Gao
et al. (2023) proposed PAL, which leverages large language
models to parse problems and generate programs as intermediate reasoning steps, delegating the solution to a runtime
environment like a Python interpreter. This neural-symbolic
collaboration has demonstrated improved accuracy across
various tasks. Suzgun and Kalai (2024) introduced metaprompting, which integrates existing prompt-based frameworks, enabling dynamic selection of the most effective reasoning strategy. These strategies, with their complex templates and multi-stage prompting, provide models with sophisticated tools for advancing CoT generation in LLMs.
-----
These methods are inherently complex, with some being task-sensitive and others involving multi-turn prompting; however, they have demonstrated substantial efficacy in
enhancing the reasoning capabilities of LLMs, thereby advancing the frontiers of CoT generation in machine learning.
**Method**
In this section, we introduce the strategic knowledge, the
Strategic Chain-of-Thought (SCoT) method, and its extension through the few-shot approach.
**Strategic Knowledge**
LLMs tend to produce varied CoT paths for the same problem. However, the quality of these CoT paths can vary significantly (Wang and Zhou 2024; Wang et al. 2022). As
shown in the left part of Figure 2(a), when solving the
math question ”compute the sum of all integers s such that
_−26 < s < 24”, one possible approach utilizes term pairing_
and summing the pairs to generate the final answer. Another
possible approach employs the arithmetic series sum formula to compute the final result directly. While both methods are valid for problem-solving, the first approach results
in less stable outputs typically due to the complexity of the
intermediate steps. In contrast, the second approach, which
applies the arithmetic series formula, generally results in
better quality and more stable model outputs. The arithmetic
series formula is considered strategic knowledge.
Strategic knowledge (Strategy) refers to a well-defined
method or principle that guides reasoning towards a correct
and stable solution. It involves using structured processes
that logically lead to the desired outcome, thereby enhancing the stability of CoT generation and improving the overall
quality of the results.
Specifically, strategic knowledge should adhere to the following principles:
1. Correct and Comprehensive Problem-Solving Approach: It provides a systematic approach that allows the
model to generate accurate answers when it follows the reasoning steps carefully.
2. Relatively Straightforward Problem-Solving Steps: The
steps of the method should not be overly complex, while
each step should be sufficiently detailed to ensure accuracy
and prevent overly brief outputs that could lead to ambiguity.
**Strategic Chain-of-Thought**
Building on the concept of strategic knowledge, we propose
a prompt-based method to enhance the reasoning quality of
LLMs, called Strategic Chain-of-Thought (SCoT).
The SCoT method enables the model to first elicit strategic knowledge before generating an answer, rather than producing an answer directly. Specifically, in a single-query setting, SCoT involves two key steps:
1. Elicitation of Strategic Knowledge: The model identifies and determines one of the most effective and efficient
methods for solving the problem, which then serves as the
strategic knowledge for the task.
2. Application of Strategic Knowledge: The model subsequently applies the identified strategic knowledge to solve
the problem and derive the final answer.
Figure 3(a) illustrates a prompt template utilizing the
SCoT approach. Our prompt consists of five components:
Role, Workflow, Rule, Initialization, and Task Input. The
prompt incorporates a structured workflow comprising three
steps integrated into a single prompt. The first two steps are
designed to identify and elicit strategic knowledge for solving the problem, while the third step focuses on applying the
strategy to generate the answer, as shown in Figure 4.
We demonstrate that the rules for strategic knowledge
identification vary across different domains. In mathematics,
strategic knowledge favors generating elegant and efficient
solutions, such as using the arithmetic series formula to sum
sequences. In physics, it involves selecting the most relevant
and straightforward formulas or processes, such as applying F = ma to calculate force. For multi-hop reasoning,
strategic knowledge focuses on determining the appropriate
granularity for problem decomposition and recalling pertinent information. Similarly, in other domains, the model first
develops an overarching method or workflow before systematically applying it to solve problems, such as optimizing
complex systems through algorithms and heuristics.
**Few-shot Strategic Chain-of-Thought**
We refine the SCoT method into a few-shot version by leveraging the strategy to select demonstrations. Our approach
is structured into two stages: constructing a strategy-based
demonstration corpus and performing model inference.
**Stage 1: Strategic Knowledge-Based Demonstration Cor-**
pus Construction.
This stage involves the following two steps, as shown in
Figure 2(b):
1. SCoT Answer Generation: We apply the zero-shot
SCoT method to the training set to generate a corresponding SCoT answer for each question in the dataset.
2. Demonstration Corpus Construction: The generated answers are compared with the ground truth. Only those accurate question-SCoT answer pairs are retained. This step assumes that the strategic knowledge used in these problems
is both correct and relevant. The validated question-SCoT
answer pairs are then compiled into a demonstration corpus
based on strategic knowledge.
**Stage 2: Model Inference.**
This stage involves the following three steps in a twoquery process, as shown in the right of Figure 2(a):
1. Strategic Knowledge Generation: The LLM generates
strategic knowledge relative to the problem, focusing on understanding the problem rather than producing the final answer.
2. Demonstration Matching: The generated strategic
knowledge is used to search the demonstration corpus created in Stage 1. The system identifies and matches the most
relevant demonstrations with the SCoT answers from the
most similar examples.
3. Few-shot Inference: The selected demonstrations are
integrated as few-shot examples into the input prompt (Figure 3(b)). This integration guides the model to generate the
final prediction based on the provided examples.
-----
**CoT** **Zero-shot SCoT** **Step1:**
Pairing terms and adding the The Arithmec sequence sum
pairs: formula can be the problem
The sequence length is 49, The formula for the sum S of an
which is odd. Hence, there are arithmec series is given by:
24 pairs that sum to -2, the S = num_of_term /2 * (first term
24 * (-2) = -48 S = 49/2 * (-25 + 23) = -49
Add the middle number: Thus, the sum of all integers s
-48 + 0 = -48 such that -26 < s < 24 is: -49
**Queson**
compute the sum of all
integers s such that -26 < s < 24?
**CoT**
**CoT Answer**
The integers in the range are: 25, -24, -23, …, 22, 23
Pairing terms and adding the
pairs:
- The sum of -25 + 23 is -2,
- The sum of -24 + 22 is -2,
- Connue this process unl the
middle of the sequence.
The sequence length is 49,
which is odd. Hence, there are
24 pairs that sum to -2, the
middle number is 0.
Calculate the sum of the pairs:
24 * (-2) = -48
Add the middle number:
-48 + 0 = -48
The sum is: -48
**Queson** **Strategy** **Strategy**
compute the sum of all **Elicitaon** To solve this problem, we
integers s such that -26 < s < 24? can use Arithmec
sequence sum formula.
**Zero-shot SCoT** **Step1:**
**Strategy Elicitaon**
**Step2:**
**Answer Generaon** **Demonstraon**
**SCoT Answer** **Corpus**
The Arithmec sequence sum
formula can be the problemsolving strategy for this queson . Demonstraon Selected
The integers in the range are: -25, **Queson A** **Queson B** …
-24, -23, …, 22, 23 **SCoT Answer A** **SCoT Answer B**
The formula for the sum S of an
arithmec series is given by:
S = num_of_term /2 * (first term
+ last term) **Few-shot SCoT**
Apply the sum formula:
S = 49/2 * (-25 + 23) = -49
Thus, the sum of all integers s
such that -26 < s < 24 is: -49
(a) Framework of Zero-shot and Few-shot Strategic Chain-of-Thought. The solid line in the middle
represents Zero-shot SCoT, while the dashed line on the right represents Few-shot SCoT.
|Col1|Queson 2|
|---|---|
|Queson 1|Queson 2|
|---|---|
|Demo Corpus|Col2|
|---|---|
|Training Set Queson 3 Queson 2 Queson 1 There are 18 apples for 28 students. How many students can have exclusive access to an apples if each student needs one and no more than 2 can share? Zero-shot SC SCoT Answer 3 SCoT Answer 2 SCoT Answer 1 We can seng up an equaon… Thus, the answer is 10 Demonstraon Corpus Queson 1 Queson 2 SCoT Answer 1 SCoT Answer 2||
(b) Construction of Demonstration Corpus
Figure 2: Illustration of Zero-shot and Few-shot Strategic SCoT. Few-shot SCoT builds upon Zero-shot SCoT by incorporating
selected demonstrations. Details of the Few-shot SCoT approach are omitted due to space limitations.
and multi-hop reasoning, and spatial reasoning:
1. Mathematics and Physical Reasoning: We assess the
models using datasets such as MathQA (Amini et al. 2019),
AQuA (Ling et al. 2017), GSM8K (Cobbe et al. 2021), and
MMLU-high-school-math (Hendrycks et al. 2021) for mathematical reasoning tasks. These datasets feature a range of
mathematical problems with varying levels of difficulty, demanding strong mathematical reasoning abilities. Additionally, we evaluated the models on ARC Challenge (Clark
et al. 2018) for physical reasoning, i.e., a popular dataset
that presents significant challenges in this domain.
2. Commonsense and Multi-hop Reasoning: We evaluate the models on CommonsenseQA (CSQA) (Talmor et al.
2019) for commonsense reasoning tasks and StrategyQA
(SQA) (Geva et al. 2021) for multi-hop reasoning tasks.
These datasets are well-regarded in their respective domains
and offer a substantial level of difficulty.
3. Spatial Reasoning: We also evaluate the models using the Tracking Object (Object) (BIG-bench authors 2023)
dataset, which represents a less common but highly intriguing type of reasoning task.
In the few-shot version of SCoT, we conduct experiments
exclusively on the MathQA, AQuA, GSM8K, and ARC
datasets. This selection is due to the requirement that the
dataset must have a sufficiently large training set with gold
answers for constructing the demonstration corpus in the
first step. Only these four datasets meet this criterion.
**Models**
To verify the effectiveness of the SCoT method, we utilize the following LLMs: the Llama3 series (Dubey et al.
2024) (including Llama3-8B, Llama3-70B, Llama3.1-8B,
and Llama3.1-70B); the Llama2 series (Touvron et al.
2023) (including Llama2-7B, Llama2-13B, and Llama270B); Mistral-7B (Jiang et al. 2023); the Qwen2 series (Yang
**# Role Seng**
**## Workflow**
**## Demonstraon**
**## Rules**
**## Inializaon**
**Task Input**
(b) Few-shot SCoT
**# Role Seng**
**## Workflow**
**## Rules**
**## Inializaon**
**Task Input**
(a) SCoT
Figure 3: Prompt templates for zero-shot and few-shot SCoT
|Col1|## Workflow|
|---|---|
|1. Search for all valid Problem- solving methods.|1. Analyze the problem and idenfy any relevant mathemacal formulas, or approaches that might be helpful, and select the approaches that can solve the problem. 2. Select the most efficient and praccal approach. For example, when asked to find the sum of all integers from -25 to 23, consider using the summaon formula of arithmec instead of simply adding the numbers one by one. The summaon formula of arithmec sequence is an elegant and praccal soluon, while rudely adding the numbers is not. 3. Solve the problem step by step following the selected approach carefully.|
|||
|2. Select one as strategic Knowledge.||
|||
|3. Use Strategic Knowledge to complete the task.||
Figure 4: Example of a Workflow in a Math Task Prompt
**Experimental Setup**
In this section, we introduce the detailed experimental setup
for validation of SCoT, including the datasets used for testing, the models covered, and the baselines employed.
**Datasets and Tasks**
To validate the effectiveness of the SCoT method, we collect
a range of reasoning-related datasets covering domains including mathematics and physical reasoning, commonsense
-----
et al. 2024a) (including Qwen2-7B and Qwen2-72B); and
ChatGLM4-9B (Team GLM et al. 2024). ChatGLM4-9B is
chat-oriented and other models are instruction-tuned.
**Baselines**
We use zero-shot prompts (Kojima et al. 2022), SelfConsistency (Wang et al. 2022) and Step Back (Zheng et al.
2023) as baselines. We only conducted experiments on 5
datasets using Step Back because Step Back is not wellsuited for other datasets. BoT (Yang et al. 2024b) is not chosen because its template has not been available, making it
impossible to reproduce.
We select the accuracy as the metric for the performance,
which is calculated by the average results of three independent inferences on each model. The experimental parameter
settings are provided in the appendix.
**Experimental Results**
In this section, we empirically evaluate the effectiveness of
the Strategic Chain-of-Thought (SCoT) approach. To verify SCoT’s efficacy across all datasets, we test it using two
open-source models, Llama3-8B and Mistral-7B. To further
validate SCoT’s effectiveness across different models, we
select one dataset from each of the three reasoning task categories and conduct tests on all 7 models. We also examine
the impact of model size, perform ablation studies on SCoT
components, conduct case studies, and analyze experimental
efficiency to understand the factors influencing the effectiveness of SCoT.
**Results across all Datasets**
The experimental results across all datasets using two models are presented in Table 1. Notably, in zero-shot settings,
SCoT outperforms the CoT approach in most tasks, with particularly significant improvements observed on the GSM8K
dataset, where accuracy increases from 52.11% to 73.16%
after incorporating strategic knowledge. Additionally, SCoT
achieves a 24.13% improvement on the Tracking Object
dataset. However, the Llama3-8B model exhibits a 2.6% decrease in performance on the ARC dataset. In general, the
Llama3-8B model shows an average improvement of 6.92%
on all datasets, while the Mistral-7B model demonstrates
an average improvement of 3.81% on comparable datasets.
Compared to Step Back and Self-Consistency, SCoT also
performs better than these two methods except for the result of Self-Consistency with Llama3-8B model on the ARC
dataset. Nevertheless, our SCoT still achieves comparable
results to it. Notably, SCoT shows substantial gains in commonsense reasoning tasks compared with other methods.
Furthermore, we extend the SCoT framework to support few-shot settings by automatically matching demonstrations, resulting in even stronger performance. The
SCoT 1-shot[−], as shown in Table 1, refers to CoT prompting with demonstrations matched through strategic knowledge. Compared to CoT 0-shot[1], SCoT 1-shot[−], which
uses strategy-matched demonstrations, shows significant
1We do not present the accuracy of CoT 1-shot separately as it
was comparable to CoT 0-shot in our experiments.
performance improvements across most datasets, highlighting the effectiveness of the matched demonstrations. The
SCoT 1-shot, which combines both strategic knowledge and
strategy-matched demonstrations, achieves the best results
overall.
**Results across all Models**
The experimental results for all models on the three datasets
are shown in Table 2. The experiments demonstrate that
SCoT can enhance performance across most models. In particular, with the exception of the Llama3.1-8B model, where
the addition of SCoT results in a slight decrease in accuracy on the MMLU task, other models exhibit accuracy improvements ranging from 1.11% to 24.13% across the three
datasets. Note that the CoT 0-shot has achieved 100% accuracy with Llama3.1-70B model on Tracking Object dataset,
and SCoT 0-shot maintains this performance.
MathQA MMLU CSQA
70 70 70
60 60 60
50 50 50
40 40 40
Accuracy(%)
30 30 30
20 20 20
7 13 70 7 13 70 7 13 70
Model Scale (#parameters in billions)
CoT 0-shot SCoT 0-shot
Figure 5: Accuracy(%) across three datasets using different
scales of models in Llama2 series
**Model Scale**
Here we investigate the impact of model size on the effectiveness of SCoT. Experiments on the Llama2 model series
with three different sizes are conducted, and the results are
shown in Figure 5. It demonstrates that SCoT can lead to
accuracy improvements across all sizes of the Llama2 models. However, a general trend emerges that performance improvement decreases marginally with model size. Furthermore, manual inspection of the model outputs reveals that
larger models are more likely to generate CoT path containing strategic knowledge in 0-shot settings.
**Ablation Study**
We explore the effects of various components within the
prompt (such as role, workflow, structure, and the quantity
of demonstrations) on accuracy. The experimental results
are illustrated in Table 3. Building on the CoT 0-shot approach, we observed that adding roles, incorporating workflows, and formatting prompts in markdown progressively
increased accuracy. We also explored the impact of the number of demonstrations on accuracy within the few-shot SCoT
framework. Experimental results indicate that as the number
of demonstrations increases, the performance of SCoT either
slightly improves or remains unchanged.
-----
Model Method MathQA AQuA GSM8K MMLU ARC SQA CSQA Object
CoT 0-shot 56.33 49.61 52.11 46.67 80.60 64.60 71.13 44.27
Self-Con **57.00** **51.90** 48.48 **49.52** **81.00** 66.00 72.06 54.00
Step Back 56.33 50.39 – 47.78 75.80 64.64 – –
SCoT 0-shot **56.67** **51.85** **73.16** **50.00** 78.02 **68.56** **74.00** **68.40**
SCoT 1-shot[−] 56.33 50.87 74.91 – 73.40 – – –
SCoT 1-shot **57.67** **55.12** **76.57** – **80.60** – – –
CoT 0-shot 30.00 29.13 36.26 29.75 67.20 56.22 61.80 21.40
Self-Con **31.42** 32.87 34.50 31.88 **68.78** 53.50 62.69 **24.50**
Step Back 31.43 32.87 – 31.85 68.00 56.72 – –
SCoT 0-shot **30.44** **33.60** **38.97** **32.35** **72.20** **61.89** **68.00** **24.75**
SCoT 1-shot[−] 34.33 31.50 45.57 – 67.40 – – –
SCoT 1-shot **37.00** **35.04** **47.38** – **73.20** – – –
Llama3-8B
Mistral-7B
Table 1: Accuracy (%) using Llama2-8B and Mistral-7B across all datasets. SCoT 1-shot[−] refers to the results obtained using
the standard few-shot CoT template but with demonstrations matched by strategy.
Dataset Method Llama3-8B Mistral-7b Chatglm4-9B Qwen2-7B Qwen2-70B Llama3.1-8B Llama3.1-70B
MMLU CoT 0-shot 46.67 29.75 66.67 **71.97** 84.20 **59.63** **85.19**
Math SCoT 0-shot **50.00+3.33** **32.35+2.59** **68.15+1.48** **71.85** **85.93+1.73** 56.42 **85.19**
CoT 0-shot 64.60 56.22 61.80 **61.00** 75.22 73.11 64.67
SQA
SCoT 0-shot **68.56+3.96** **61.89+5.67** **64.67+2.87** **61.00** **77.67+2.45** **74.22+1.11** **82.33+1.33**
CoT 0-shot 44.27 21.40 61.80 46.20 93.93 62.60 **100.00**
Object
SCoT 0-shot **68.40+24.13** **24.67+3.27** **69.00+7.20** **47.53+1.33** **97.47+3.54** **77.60+15.00** **100.00**
Table 2: Accuracy(%) across seven models on MMLU, SQA and Tracking Object datasets
|Method|AQuA ARC|
|---|---|
|Mistral-7B* Mistral-7B + Role* Mistral-7B + Role Mistral-7B + WorkFlow* Mistral-7B + WorkFlow|29.13% 67.20% 27.95% 69.80% 32.28% 71.20% 33.07% 70.40% 31.89% 70.40%|
|SCoT 0-shot (Ours) SCoT 1-shot (Ours) SCoT 3-shot (Ours)|33.60% 72.20% 35.04% 73.20% 35.43% 73.20%|
Table 3: Ablation study on SCoT prompt components: * denotes a non-markdown format, while no * indicates a markdown format.
**Case Study**
We conduct a detailed case study focusing on the validity
of the strategic knowledge elicited from the model. Figure 6
shows several typical cases.
In the domain of mathematics, we observe that the SCoT
output tends to favor solving problems using inequalities
rather than directly analyzing the problem to reach an answer. For the instance of frog jumping calculation in the Figure 6, an incorrect solution may miscalculate the final jump’s
impact. While generating a strategy ensures accurate calculations by considering all constraints and systematically
solving the problem.
In the field of physics, we find that the model’s CoT output could be misled by specific phrases in the task input (e.g.,
”capacitor”), leading to the selection of an incorrect formula.
In contrast, the SCoT approach successfully elicited the correct formula. Similarly, in multi-hop reasoning tasks, CoT
output often focuses on details, resulting in incomplete subsequent logical reasoning, whereas SCoT generates answers
by considering the overall context.
**Efficiency Analysis**
|Dataset|Method|Llama3-8B|Mistral-7B|
|---|---|---|---|
|AQuA|CoT 0-shot|361.384|270.260|
||SCoT 0-shot|370.378|458.413|
|GSM8K|CoT 0-shot|130.532|858.507|
||SCoT 0-shot|206.278|611.848|
|Object|CoT 0-shot|121.460|89.654|
||SCoT 0-shot|174.888|162.822|
Table 4: Token length comparison for SCoT and CoT 0-shot
methods
Due to SCoT’s mechanism of generating strategy before
solving problems in one query, it is more efficient than
multi-query methods. However, compared to single-query
-----
|Math Queson A|A well is 30 meters deep. A frog at the boom jumps 3 meters up each day but slips back 2 meters each night. How many days will it take for the frog to climb out of the well?|
|---|---|
|CoT Answer|To solve this problem, we need to consider the frog's daily progress. On the first day, the frog jumps up 3 meters but slips back 2 meters, resulng in a net gain of 1 meter. From the second day onward, the frog gains 1 meter each day. The frog needs to climb a total of 30 meters to get out of the well. Aer the first day, it sll needs to climb 29 meters. Therefore, the total number of days required is 1+29=30 days.|
|SCoT Answer|To address this problem elegantly, we can ulize a mathemacal model. The frog gains a net height of 1 meter each day. However, on the day it reaches the top of the well, it will no longer slip back down. Let n represent the number of days it takes for the frog to climb out of the well. We canformulate the following inequality: n - 1 + 3 >= 30 This simplifies to: n >= 28. Thus, the total number of days for the frog to climb out of the well is28 days.|
|Physical Queson B|How much work is required to charge a 10μF capacitor to a potenal difference of 100V?|
|CoT Answer|Using the formula for work done by a capacitor, which is W=k×V×C, and substung the values, we find that 0.09J of work needs to be done.|
|SCoT Answer|Using the capacitor charging formula. The formula for capacitor charging isE = (1/2)CV^2 To substute the values, it needs to do 0.05J of work|
|Mul-hop Queson C|Is shrimp scampi definitely free of plasc?|
|CoT Answer|Shrimp scampi is a dish made with shrimp, buer, garlic, and somemes lemon, usually served with pasta. The ingredients typically include fresh or frozen shrimp, buer, garlic, and lemon juice. There is no connecon between shrimp scampi and plasc, as plasc is not an ingredient in this dish. Therefore, shrimp scampi is True definitely free of plasc.|
|SCoT Answer|Shrimp scampi usually does not contain plasc, but contaminaon is possible. It is typically made with ingredients like shrimp, garlic, buer, and parsley, which do not contain plasc, and there are no known instances of shrimp scampi being made with plasc-containing ingredients. Therefore, while shrimp scampi is usually made without plasc. So, the answer is: No.|
Figure 6: Comparison of the paths generated by CoT and SCoT in different domains.
methods, the output token length might be longer, potentially decreasing efficiency. To investigate this, we measure
the output token lengths for the AQuA, GSM8K, and Tracking Object datasets using both CoT 0-shot and SCoT 0-shot
methods. The results are shown in Table 4.
The results indicate that the token length output by the
Mistral-7B model on the GSM8K dataset decreases with the
SCoT method. This reduction may be due to the model’s
tendency to repetitively generate a specific answer span up
to the inference length limit on the GSM8K dataset in CoT
0-shot, leading to a decline in accuracy. SCoT mitigates this
issue. Besides, the length of SCoT varies from 1.03 to 1.8
times that of CoT, averaging around 1.5 times. This shows
that while our method is somewhat slower than CoT, the efficiency remains manageable.
**Discussions**
**Automatic SCoT**
To demonstrate that our experimental results are not influenced by human-crafted prompts but rather due to the concept of SCoT, we conduct a preliminary test to evaluate
whether the SCoT prompt templates can be automatically
generated. We provide the SCoT concept to Qwen2-72B
to generate the corresponding prompt templates and tested
these on the AQuA dataset. The results are presented in
Table 5. The findings indicate that while the accuracy of
prompts automatically generated based on the SCoT concept is lower than that of manually crafted SCoT prompts,
it is still superior to 0-shot CoT performance. This suggests
that the automatic generation of SCoT-based prompt templates is feasible.
Method Accuracy
CoT 0-shot 29.13
SCoT 0-shot 33.60
**Auto SCoT** **31.89**
Table 5: Accuracy(%) using automatically generated
prompts by LLMs based on the SCoT concept
**Conclusion**
In this paper, we introduce the Strategic Chain-of-Thought,
a method that enables LLMs to autonomously generate an
optimal Chain-of-Thought path. By integrating a structured
workflow for eliciting and applying strategic knowledge,
SCoT enhances the model’s ability to produce a high quality
outputs. We further extend SCoT to a few-shot version by
matching demonstrations through strategic knowledge from
a predefined strategic knowledge-based corpus. Experimental results demonstrate the effectiveness of both 0-shot SCoT
and few-shot SCoT.
Overall, SCoT offers a promising framework for improving the quality of reasoning path in LLMs. Future research
will focus on evaluating its effectiveness with more complex
problems and exploring further applications.
-----
**References**
Amini, A.; Gabriel, S.; Lin, P.; Koncel-Kedziorski, R.; Choi,
Y.; and Hajishirzi, H. 2019. MathQA: Towards Interpretable
Math Word Problem Solving with Operation-Based Formalisms. arXiv:1905.13319.
BIG-bench authors. 2023. Beyond the Imitation Game:
Quantifying and extrapolating the capabilities of language
models. Transactions on Machine Learning Research.
Chang, Y.; Wang, X.; Wang, J.; Wu, Y.; Yang, L.; Zhu, K.;
Chen, H.; Yi, X.; Wang, C.; Wang, Y.; et al. 2024. A survey
on evaluation of large language models. ACM Transactions
_on Intelligent Systems and Technology, 15(3): 1–45._
Clark, P.; Cowhey, I.; Etzioni, O.; Khot, T.; Sabharwal, A.;
Schoenick, C.; and Tafjord, O. 2018. Think you have Solved
Question Answering? Try ARC, the AI2 Reasoning Challenge. arXiv:1803.05457.
Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.;
Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.;
Hesse, C.; and Schulman, J. 2021. Training Verifiers to
Solve Math Word Problems. arXiv:2110.14168.
Dubey, A.; Jauhri, A.; Pandey, A.; Kadian, A.; Al-Dahle, A.;
Letman, A.; and et al. 2024. The Llama 3 Herd of Models.
arXiv:2407.21783.
Gao, L.; Madaan, A.; Zhou, S.; Alon, U.; Liu, P.; Yang,
Y.; Callan, J.; and Neubig, G. 2023. Pal: Program-aided
language models. In International Conference on Machine
_Learning, 10764–10799. PMLR._
Geva, M.; Khashabi, D.; Segal, E.; Khot, T.; Roth, D.; and
Berant, J. 2021. Did Aristotle Use a Laptop? A Question
Answering Benchmark with Implicit Reasoning Strategies.
arXiv:2101.02235.
Hendrycks, D.; Burns, C.; Basart, S.; Zou, A.; Mazeika, M.;
Song, D.; and Steinhardt, J. 2021. Measuring Massive Multitask Language Understanding. arXiv:2009.03300.
Huang, J.; and Chang, K. C.-C. 2022. Towards reasoning in large language models: A survey. _arXiv preprint_
_arXiv:2212.10403._
Jiang, A. Q.; Sablayrolles, A.; Mensch, A.; Bamford, C.;
Chaplot, D. S.; de las Casas, D.; and et al. 2023. Mistral
7B. arXiv:2310.06825.
Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa,
Y. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:
22199–22213.
Kwon, W.; Li, Z.; Zhuang, S.; Sheng, Y.; Zheng, L.; Yu,
C. H.; Gonzalez, J. E.; Zhang, H.; and Stoica, I. 2023. Efficient Memory Management for Large Language Model
Serving with PagedAttention. In Proceedings of the ACM
_SIGOPS 29th Symposium on Operating Systems Principles._
Lewis, P.; Perez, E.; Piktus, A.; Petroni, F.; Karpukhin,
V.; Goyal, N.; K¨uttler, H.; Lewis, M.; tau Yih, W.;
Rockt¨aschel, T.; Riedel, S.; and Kiela, D. 2021. RetrievalAugmented Generation for Knowledge-Intensive NLP
Tasks. arXiv:2005.11401.
Ling, W.; Yogatama, D.; Dyer, C.; and Blunsom, P.
2017. Program Induction by Rationale Generation :
Learning to Solve and Explain Algebraic Word Problems.
arXiv:1705.04146.
Rusczyk, R. 2003. The Art of Problem Solving. Washington,
D.C.: Mathematical Association of America.
Sivarajkumar, S.; Kelley, M.; Samolyk-Mazzanti, A.;
Visweswaran, S.; and Wang, Y. 2023. An Empirical Evaluation of Prompting Strategies for Large Language Models in Zero-Shot Clinical Natural Language Processing.
arXiv:2309.08008.
Suzgun, M.; and Kalai, A. T. 2024. Meta-prompting: Enhancing language models with task-agnostic scaffolding.
_arXiv preprint arXiv:2401.12954._
Sweller, J. 1988. Cognitive Load During Problem Solving:
Effects on Learning. Cognitive Science, 12(2): 257–285.
Talmor, A.; Herzig, J.; Lourie, N.; and Berant, J. 2019. CommonsenseQA: A Question Answering Challenge Targeting
Commonsense Knowledge. In Burstein, J.; Doran, C.; and
Solorio, T., eds., Proceedings of the 2019 Conference of the
_North American Chapter of the Association for Computa-_
_tional Linguistics: Human Language Technologies, Volume_
_1 (Long and Short Papers), 4149–4158. Minneapolis, Min-_
nesota: Association for Computational Linguistics.
Team GLM; Zeng, A.; Xu, B.; Wang, B.; Zhang, C.;
Yin, D.; and et al. 2024. ChatGLM: A Family of Large
Language Models from GLM-130B to GLM-4 All Tools.
arXiv:2406.12793.
Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.;
Babaei, Y.; and et al. 2023. Llama 2: Open Foundation and
Fine-Tuned Chat Models. arXiv:2307.09288.
Wang, L.; Xu, W.; Lan, Y.; Hu, Z.; Lan, Y.; Lee, R. K.-W.;
and Lim, E.-P. 2023. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language
models. arXiv preprint arXiv:2305.04091.
Wang, X.; Wei, J.; Schuurmans, D.; Le, Q.; Chi, E.; Narang,
S.; Chowdhery, A.; and Zhou, D. 2022. Self-consistency
improves chain of thought reasoning in language models.
_arXiv preprint arXiv:2203.11171._
Wang, X.; and Zhou, D. 2024. Chain-of-thought reasoning
without prompting. arXiv preprint arXiv:2402.10200.
Wang, Z.; Zhao, S.; Wang, Y.; Huang, H.; Shi, J.; Xie, S.;
Wang, Z.; Zhang, Y.; Li, H.; and Yan, J. 2024. Re-TASK:
Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives. arXiv:2408.06904.
Yang, A.; Yang, B.; Hui, B.; Zheng, B.; Yu, B.; Zhou, C.;
and et al. 2024a. Qwen2 Technical Report. arXiv preprint
_arXiv:2407.10671._
Yang, L.; Yu, Z.; Zhang, T.; Cao, S.; Xu, M.; Zhang, W.;
Gonzalez, J. E.; and Cui, B. 2024b. Buffer of Thoughts:
Thought-Augmented Reasoning with Large Language Models. arXiv preprint arXiv:2406.04271.
Zhang, Z.; Zhang, A.; Li, M.; and Smola, A. 2022. Automatic chain of thought prompting in large language models.
_arXiv preprint arXiv:2210.03493._
-----
Zhang, Z.; Zhang, A.; Li, M.; Zhao, H.; Karypis, G.; and
Smola, A. 2023. Multimodal chain-of-thought reasoning in
language models. arXiv preprint arXiv:2302.00923.
Zheng, H. S.; Mishra, S.; Chen, X.; Cheng, H.-T.; Chi, E. H.;
Le, Q. V.; and Zhou, D. 2023. Take a step back: Evoking
reasoning via abstraction in large language models. arXiv
_preprint arXiv:2310.06117._
-----
**Details of Experiments**
**Models Details**
This experiment involves ten models, nine of which are
public (Llama3-8B, Llama2-7B, Mistral-7B, Llama3.1-8B,
Qwen2-7B, ChatGLM4-9B, Llama3-70B, Llama3.1-70B,
Llama2-70B, and Qwen2-72B), while one model is private.
The sources and licenses for all public models are detailed
in Table 6.
**Datasets Details**
This experiment involves eight datasets: MathQA, AQuA,
GSM8K, MMLU, ARC, StrategyQA, CommonsenseQA,
and Tracking Object. All datasets used in this study are publicly available, with their sources and licenses detailed in Table 7.
MathQA, AQuA, MMLU, ARC, StrategyQA, CommonsenseQA, and Tracking Object consist of multiple-choice
questions. To determine correctness, we compare the predicted choice with the gold (correct) choice. For GSM8K,
the answers are numerical text spans; we assess correctness
by checking if the predicted answer exactly matches the gold
answer.
**Other Details**
For all experiments, except those involving SelfConsistency, the temperature is set to 0, and the top p
parameter is set to 1. For Self-Consistency, following the
settings from the original paper (Wang et al. 2022), the
temperature is adjusted to 0.5, and top p is set to 0.5.
We utilize vllm (Kwon et al. 2023) as the inference framework for all deployments.
**Results**
**All Results**
Accuracy is used as the evaluation metric. We conducted
three independent inference runs for all experiments and calculated the average results. However, due to the high computational cost, we performed only a single inference for SelfConsistency. The accuracy and standard deviation results are
presented in Table 8 and Table 9.
**Case Study**
We conducted a detailed case study to assess the validity
of the strategic knowledge elicited from the model. Figures 7 and 8 present several representative cases spanning
math reasoning, physical reasoning, commonsense reasoning, multi-hop reasoning, and spatial reasoning.
**Experimental Prompts**
The prompt for standard zero-shot Chain-of-Thought is
shown in Figure 9. Prompts for zero-shot Strategic Chainof-Thought are displayed in Figure 10 (for math reasoning), Figure 11 (for multi-hop reasoning), Figure 13 (for
physical reasoning), Figure 12 (for commonsense reasoning)
and Figure 14 (for spatial reasoning). Prompts for one-shot
Strategic Chain-of-Thought are shown in Figure 15. Finally,
the prompts for automated Strategic Chain-of-Thought are
shown in Figure 16. The automated SCOT prompts were
generated using LLMs by given the idea of SCoT.
-----
**Models** **Modelsources** **License**
Llama2-7B-chat https://huggingface.co/meta-llama/Llama-2-7b-chat llama2 license
Llama2-13B https://huggingface.co/meta-llama/Llama-2-13b-chat llama2 license
Llama2-70B https://huggingface.co/meta-llama/Llama-2-70b-chat llama2 license
https://www.modelscope.cn/models/FlagAlpha/
Llama3-8B Apache License 2.0
Llama3-Chinese-8B-Instruct/summary
Llama3.1-8B https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct llama3.1 license
Llama3.1-70B https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct llama3.1 license
Mistral-7B https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2 Apache License 2.0
Qwen2-7B https://huggingface.co/Qwen/Qwen2-7B-Instruct Apache License 2.0
Qwen2-72B https://huggingface.co/Qwen/Qwen2-72B-Instruct Apache License 2.0
ChatGLM4-9b https://huggingface.co/THUDM/glm-4-9b-chat glm-4-9b License
Table 6: Models, sources and licenses used in this work
**Datasets** **Sources** **Licenses**
MathQA https://huggingface.co/datasets/datafreak/MathQA Apache License 2.0
AQuA https://github.com/google-deepmind/AQuA Apache License 2.0
GSM8K https://huggingface.co/datasets/openai/gsm8k MIT License
MMLU https://huggingface.co/datasets/cais/mmlu MIT License
ARC https://huggingface.co/datasets/allenai/ai2 arc CC-BY-SA-4.0 License
StrategyQA https://huggingface.co/datasets/ChilleD/StrategyQA/viewer/default/test MIT License
CommonsenseQA https://huggingface.co/datasets/tau/commonsense qa MIT License
https://github.com/google/BIG-bench/tree/092b196c1f8f14a54bbc62f24759d43bde46dd3b
Object Tracking Apache License 2.0
/bigbench/benchmark tasks/tracking shuffled objects/three objects
Table 7: Datasets, sources and licenses used in this work
Model Method MathQA AQuA GSM8K MMLU ARC SQA CSQA Object
CoT 0-shot 56.33±0.000 49.61±1.790 52.11±0.129 46.67±0.000 80.60±0.000 64.60±0.646 71.13±0.094 44.27±0.736
Self-Con **57.00** **51.90** 48.48 **49.52** **81.00** 66.00 72.06 54.00
Step Back 56.33±0.272 50.39±0.000 – 47.78±0.000 75.80±0.248 64.64±0.2722 – –
Llama3-8B SCoT 0-shot **56.67±0.000** **51.85±1.299** **73.16±0.163** **50.00±0.000** 78.02±0.000 **68.56±0.566** **74.00±0.000** **68.40±0.000**
SCoT 1-shot[−] 56.33±0.000 50.87±2.140 74.91±0.000 – 73.40±0.000 – – –
SCoT 1-shot **57.67±0.000** **55.12±0.000** **76.57±0.000** – **80.60±0.000** – – –
CoT 0-shot 30.00±0.000 29.13±1.245 36.26±1.854 29.75±0.924 67.20±0.356 56.22±0.314 61.80±0.000 21.40±0.000
Self-Con **31.42** 32.87 34.50 31.88 **68.78** 53.50 62.69 **24.50**
Step Back 31.43±0.000 32.87±0.322 – 31.85±0.495 68.00±0.000 56.72±0.000 – –
Mistral-7B SCoT 0-shot **30.44±0.874** **33.60±1.523** **38.97±0.655** **32.35±1.665** **72.20±0.370** **61.89±0.415** **68.00±0.000** **24.75±0.165**
SCoT 1-shot[−] 34.33±0.000 31.50±0.964 45.57±1.087 – 67.40±0.000 – – –
SCoT 1-shot **37.00±0.000** **35.04±0.000** **47.38±0.107** – **73.20±0.000** – – –
Table 8: Accuracy (%) using Llama2-8B and Mistral-7B across all datasets. SCoT 1-shot[−] refers to the results obtained using
the standard few-shot CoT template but with demonstrations matched by strategy.
-----
Dataset Method Llama3-8B Mistral-7b Chatglm4-9B Qwen2-7B Qwen2-70B Llama3.1-8B Llama3.1-70B
MMLU CoT 0-shot 46.67±0.000 29.75±0.924 66.67±0.302 **71.97±0.349** 84.20±0.349 **59.63±0.000** **85.19±0.605**
Math SCoT 0-shot **50.00±0.000** **32.35±1.665** **68.15±0.907** **71.85±0.302** **85.93±0.302** 56.42±0.175 **85.19±0.000**
CoT 0-shot 64.60±0.595 56.22±0.314 61.80±0.363 **61.00±0.000** 75.22±0.314 73.11±0.314 64.67±0.000
SQA
SCoT 0-shot **68.56±0.566** **61.89±0.415** **64.67±0.408** **61.00±0.157** **77.67±0.272** **74.22±0.157** **82.33±0.157**
CoT 0-shot 44.27±0.736 21.40±0.000 61.80±0.000 46.20±0.000 93.93±0.525 62.60±0.411 **100.00±0.000**
Object
SCoT 0-shot **68.40±0.000** **24.67±0.000** **69.00±0.000** **47.53±0.094** **97.47±0.339** **77.60±0.993** **100.00±0.000**
Table 9: Accuracy(%) across seven models on MMLU, SQA and Tracking Object datasets
|Math Queson A|A well is 30 meters deep. A frog at the boom jumps 3 meters up each day but slips back 2 meters each night. How many days w ill it take for the frog to climb out of the well?|
|---|---|
|CoT Answer|To solve this problem, we need to consider the frog's daily progress. On the first day, the frog jumps up 3 meters but slips back 2 meters, resulng in a net gain of 1 meter. From the second day onward, the frog gains 1 meter each day. The frog needs to climb a total of 30 meters to get out of the well. Aer the first day, it sll needs to climb 29 meters. Therefore, the total number of days required is 1+29=30 days.|
|SCoT Answer|To address this problem elegantly, we can ulize a mathemacal model. The frog gains a net height of 1 meter each day. However, on the day it reaches the top of the well, it will no longer slip back down. Let n represent the number of days it takes for the frog to climb out of the well. We can formulate the following inequality: n - 1 + 3 >= 30 This simplifies to: n >= 28. Thus, the total number of days for the frog to climb out of the well is 28 days.|
|Physical Queson B|How much work is required to charge a 10μF capacitor to a potenal difference of 100V?|
|CoT Answer|Using the formula for work done by a capacitor, which is W=k×V×C, and substung the values, we find that 0.09J of work needs to be done.|
|SCoT Answer|Using the capacitor charging formula. The formula for capacitor charging is E = (1/2)CV^2 To substute the values, it needs to do 0.05J of work|
|Math Queson C|Jordan plays video games for 2 hours every day. He also has a part-me job where he earns $10 an hour. How much money would Jordan earn in one week if he spent his video game me working instead?|
|CoT Answer|To calculate how much money Jordan would earn in one week if he spent his video game me working instead, we need to follow these steps: Step 1: Jordan plays video games for 2 hours every day. There are 7 days in a week, so he would work for 7 - 2 = 5 hours each day. Step 2: Jordan earns 10 an hour. If he works 5 hours a day, his daily earnings would be 5 hours * 10/hour = 50. To find his weekly earnings, we mulply his daily earnings by the number of days in a week: 50/day * 7 days/week = 350/week.|
|SCoT Answer|Calculate the total hours he would have worked in a week by mulplying the number of hours he plays per day by the number of days in a week. Given: 1. Hours Jordan plays video games per day = 2 hours; 2. Days in a week = 7 days Total hours = 2 hours/day ×7 days/week = 14 hours/week Calculate the total amount of money Jordan would have earned in one week if he worked those 14 hours. Total earnings = 14 hours/week ×$10/hour = $140/week|
Figure 7: Comparison of the paths generated by CoT and SCoT in different domains.
-----
|Object Queson A|Alice, Bob, and Claire are playing a game. At the start of the game, they are each holding a ball: Alice has a brown ball, Bo b has a red ball, and Claire has a purple ball. As the game progresses, pairs of players trade balls. First, Bob and Claire swap balls. T hen, Alice and Claire swap balls. Finally, Alice and Bob swap balls. At the end of the game, Claire has the A) brown ball; B) red ball; C) purple ball|
|---|---|
|CoT Answer|Let's break down the sequence of swaps: 1. Bob and Claire swap balls: Bob gets the purple ball, and Claire gets the red ball. 2. Alice and Claire swap balls: Alice gets the red ball (which Claire had just goen), and Claire gets the brown ball. 3. Alice and Bob swap balls: Alice gets the red ball (which she had just given to Claire), and Bob gets the brown ball. So, at the end of the game, the final posions are: Claire has the purple ball.|
|SCoT Answer|Inial State: Alice has a brown ball; Bob has a red ball; Claire has a purple ball Process Exchanges: 1. Bob and Claire swap balls: Alice sll has a brown ball; Bob has a purple ball; Claire has a red ball 2. Alice and Claire swap balls: Alice has a red ball; Bob has a purple ball; Claire has a brown ball 3. Alice and Bob swap balls: Alice has a purple ball; Bob has a red ball; Claire has a brown ball Determine the Answer: The correct answer is that Claire has the brown ball.|
|Math Queson B|Find out which of the following values is the mulple of X, if it is divisible by 9 and 12? A) 36; B) 15; C) 17; D) 5; E) 7|
|CoT Answer|To find the mulple of a number that is divisible by both 9 and 12, we need to find the common mulple of 9 and 12. The least common mulple (LCM) of 9 and 12 can be found using the prime factorizaon method or the LCM table. The prime factorizaon of 9 is 3^2, and the prime factorizaon of 12 is 2^2 * 2 * 3. Since 3^2 is a factor of both 3^2 and 2^2 * 2 * 3, the LCM of 9 and 12 is 3^2 * 2^2 * 2 * 3 = 3^2 * 2^2 * 3 = 3^2 * 9 = 9^2 = 81. Now, let's check which opon is a perfect square of 9. Opon B, 15, is a perfect square of 9, it is the correct answer..|
|SCoT Answer|The problem states that X is divisible by 9 and 12. To find the mulple of X that fits the given condions, we need to find the least common mulple (LCM) of 9 and 12. Choose the most efficient and praccal approach: the LCM of 9 and 12 is 36. Therefore, the correct answer is A) 36.|
|Mul-hop Queson C|Is shrimp scampi definitely free of plasc?|
|CoT Answer|Shrimp scampi is a dish made with shrimp, buer, garlic, and somemes lemon, usually served with pasta. The ingredients typ ically include fresh or frozen shrimp, buer, garlic, and lemon juice. There is no connecon between shrimp scampi and plasc, as plasc is not an ingredient in this dish. Therefore, shrimp scampi is True definitely free of plasc.|
|SCoT Answer|Shrimp scampi usually does not contain plasc, but contaminaon is possible. It is typically made with ingredients like shr imp, garlic, buer, and parsley, which do not contain plasc, and there are no known instances of shrimp scampi being made with plasc- containing ingredients. Therefore, while shrimp scampi is usually made without plasc. So, the answer is: No.|
Figure 8: Comparison of the paths generated by CoT and SCoT in different domains.
|Zero-shot CoT template|I will provide you with a math problem and 5 opons. Please choose the correct opon from the five provided and indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 9: An example of prompting for standard zero-shot CoT
-----
|Zero-shot SCoT template|# Role A highly skilled mathemacian and algorithm expert. # Workflow 1. Analyze the problem and idenfy any relevant mathemacal formulas, or approaches that might be helpful, and select the approaches that can solve the problem. 2. Choose the most efficient and praccal approach. For example, when asked to find the sum of all integers from -25 to 23, consider using the summaon formula of arithmec sequence instead of simply adding the numbers one by one. The summaon formula of arithmec sequence is an elegant and praccal soluon, while rudely adding the numbers is not. 3. Solve the problem step by step following the selected approach carefully. ## Rules 1. Avoid using brute force methods, as they do not reflect the professionalism. 2. Indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. ## Inializaon As <Role>, please follow <Rules> strictly. Your task is to solve the math problem following <Workflow>.I will provide you with a problem and 5 opons. Please choose the correct opon from the five provided. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 10: An example of prompting for standard Strategic Chain-of-Thought in math reasoning tasks
|Zero-shot SCoT template|# Role An expert of world knowledge with strong logical skills. # Workflow 1. Analyze the problem and break down the complex query into simpler sub- quesons. 2. Sequenally finding reliable answers for each sub-queson. 3. Integrang these answers to form a comprehensive. Directly answering the main queson is rude, but breaking it down, answering the sub-quesons, and then integrang the answers is elegant and praccal. ## Rules 1. Avoid using brute force methods, as they do not reflect the professionalism. 2. Indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. ## Inializaon As <Role>, please follow <Rules> strictly. Your task is to solve the problem following <Workflow>.I will provide you with a problem and 5 opons. Please choose the correct opon from the five provided. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 11: An example of prompting for standard Strategic Chain-of-Thought in multi-hop reasoning tasks
-----
|Zero-shot SCoT template|# Role An expert with world knowledge and reasoning abilies. # Workflow 1. Understanding the Queson: Idenfy key concepts and comprehend the queson's context. Ensure you grasp the main idea and any analogies being used. Search for any concept, knowledge, or common sense related to the topic. 2. Analyzing the Opons: Read each choice carefully, understand its meaning, and relate it to the queson's context to determine relevance. 3. Logical Reasoning: Use logical reasoning to eliminate opons that are clearly irrelevant or incorrect based on the queson's context. Compare the remaining opons to idenfy the one that best aligns with the queson's requirements and the context provided. ## Rules 1. Avoid using brute force methods, as they do not reflect the professionalism. 2. Indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. ## Inializaon As <Role>, please follow <Rules> strictly. Your task is to solve the problem following <Workflow>.I will provide you with a problem and 5 opons. Please choose the correct opon from the five provided. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 12: An example of prompting for standard Strategic Chain-of-Thought in commonsense reasoning tasks
|Zero-shot SCoT template|# Role A careful expert proficient in various world knowledge. # Workflow 1. Careful Queson Analysis: - Read the Problem and the Opons Carefully: Ensure you understand the background and specific queson being asked . - Idenfy Keywords: Extract key terms or phrases from the Problem and the Opons, try recalling their meanings . - Understand the Problem: Ensure you clearly understand what the Problem is asking, including any specific condions or requirements. Eliminate opons that are not relevant to the problem. 2. Idenfy Relevant Knowledge and approaches: - Recall Related Knowledge or approach: Idenfy all the relevant concepts, principles, or formulas that might apply to the Problem. - Select Appropriate Knowledge: Choose the knowledge, formulas and approaches that can solve the problem. 3. Choose the Most Efficient and Praccal Knowledge and Formulas: When solving the problem, select the most efficient and praccal knowledge, formulas or approaches. For example, when the descripon of a problem is related to potenal energy and kinec energy of an object, aer using the formula PE = mgh, carefully analyze each opon to judge right or wrong, rather than relying on experience or ready-made theorems to select opons. 4. Careful Applicaon of Knowledge and Formulas: -Detailed Analysis: When applying formulas and knowledge, pay aenon to the specific condions and variables in the problem. - Logical Reasoning: Carefully analyze each variable in the formula or methodically derive conclusions based on the knowledge point, ensuring the reasoning process is consistent and correct. For example, when using PE = mgh, you need to analyze the overall effect of all variables, including m, g, and h, rather than just one variable. ## Rules 1. Avoid using brute force methods, as they do not reflect the professionalism. 2. Indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. ## Inializaon As <Role>, please follow <Rules> strictly. Your task is to solve the problem following <Workflow>.I will provide you with a problem and 5 opons. Please choose the correct opon from the five provided. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 13: An example of prompting for standard Strategic Chain-of-Thought in physical reasoning tasks
-----
|Zero-shot SCoT template|# Role A very meculous logical Analyst. # Workflow 1. Inial State: First, list the inial state of the balls each person has according to the problem statement. 2. Process Exchanges: Next, carefully read the problem statement. For each exchange, update the current state of the balls and document the result of each exchange. 3. Determine the Answer: Once all exchanges are completed, idenfy which friend's ball color is being inquired about in the problem statement and select the correct answer. ## Rules 1. Avoid using brute force methods, as they do not reflect the professionalism. 2. Indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. ## Inializaon As <Role>, please follow <Rules> strictly. Your task is to solve the problem following <Workflow>.I will provide you with a problem and 5 opons. Please choose the correct opon from the five provided. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 14: An example of prompting for standard Strategic Chain-of-Thought in spatial reasoning tasks
|One-shot SCoT template|# Role A highly skilled mathemacian and algorithm expert. # Workflow 1. Analyze the problem and idenfy any relevant mathemacal formulas, or approaches that might be helpful, and select the approaches that can solve the problem. 2. Choose the most efficient and praccal approach. For example, when asked to find the sum of all integers from -25 to 23, consider using the summaon formula of arithmec sequence instead of simply adding the numbers one by one. The summaon formula of arithmec sequence is an elegant and praccal soluon, while rudely adding the numbers is not. 3. Solve the problem step by step following the selected approach carefully. ## Demonstraons Problem: [Please Put Your Demonstraon Problem Here] Opons: [Please Put Your Demonstraon Opons Here] Answer: [Please Put Your Demonstraon Answer Here] ## Rules 1. Avoid using brute force methods, as they do not reflect the professionalism. 2. Indicate your answer with [Answer]opon[Answer], such as [Answer]C[Answer]. Please output the answer at the end in strict accordance with the output format. ## Inializaon As <Role>, please follow <Rules> strictly. Your task is to solve the math problem following <Workflow>, <Demonstraon> is some examples. I will provide you with a problem and 5 opons. Please choose the correct opon from the five provided. Problem: [Please Put Your Quesons Here] Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 15: An example of prompting for one-shot Strategic Chain-of-Thought
-----
|Auto Zero-shot SCoT template|You are tasked with solving a reasoning problem by first idenfying the most effecve strategy before arriving at the final answer. Carefully consider the problem and generate the strategic knowledge that would best guide the problem-solving process. Problem: [Please Put Your Problem Here] Next, use the generated strategic knowledge to work through the problem step by step, showing all necessary reasoning, and arrive at the final soluon. Opons: [Please Put Your Opons Here] Answer: Let's think step by step.|
|---|---|
Figure 16: An example of prompting for automatic Strategic Chain-of-Thought
-----
| [
"Shiwan, Zhao",
"Zhihu, Wang",
"Heyuan, Huang",
"Ming, Fan",
"Yubo, Zhang",
"Zhixing, Wang",
"Haijun, Wang",
"Ting, Liu",
"Yu, Wang"
] | 2024-09-05T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2409.03271 | https://arxiv.org/abs/2409.03271 | https://www.semanticscholar.org/paper/1190475124f300d53612d427cd5d243337b2a11c |
Subgoal-based Demonstration Learning for Formal Theorem Proving | Large language models (LLMs) present a promising pathway for advancing the domain of formal theorem proving. In this paper, we aim to improve the performance of LLMs in formal theorem proving by thoroughly examining the structure and organization of demonstrative in-context examples. We introduce a subgoal-based demonstration learning framework, specifically designed to enhance the efficiency of proof search in LLMs. First, drawing upon the insights of subgoal learning from reinforcement learning and robotics, we propose the construction of distinct subgoals for each demonstration example and refine these subgoals in accordance with the pertinent theories of subgoal learning. Second, we build upon recent advances in diffusion models to predict the optimal organization, simultaneously addressing two intricate issues that persist within the domain of demonstration organization: subset selection and order determination. Our integration of subgoal-based learning has notably increased proof accuracy from 38.9% to 44.1% on the miniF2F benchmark. Furthermore, the adoption of diffusion models for demonstration organization can lead to an additional enhancement in accuracy to 45.5%, or a $5\times$ improvement in sampling efficiency compared to previously established methods. | This paper introduces a subgoal-based demonstration learning framework, specifically designed to enhance the efficiency of proof search in LLMs, and builds upon recent advances in diffusion models to predict the optimal organization. | ## Decomposing the Enigma: Subgoal-based Demonstration Learning for Formal Theorem Proving
**Xueliang Zhao**
The University of Hong Kong
[email protected]
**Wenda Li**
University of Cambridge
[email protected]
**Lingpeng Kong**
The University of Hong Kong
[email protected]
**Abstract**
Large language models (LLMs) present an intriguing avenue of exploration in
the domain of formal theorem proving. Nonetheless, the full utilization of these
models, particularly in terms of demonstration formatting and organization, remains
an underexplored area. In an endeavor to enhance the efficacy of LLMs, we
introduce a subgoal-based demonstration learning framework, consisting of two
primary elements: Firstly, drawing upon the insights of subgoal learning from
the domains of reinforcement learning and robotics, we propose the construction
of distinct subgoals for each demonstration example and refine these subgoals in
accordance with the pertinent theories of subgoal learning. Secondly, we build
upon recent advances in diffusion models to predict the optimal organization,
simultaneously addressing two intricate issues that persist within the domain of
demonstration organization: subset selection and order determination. Through
the integration of subgoal-based learning methodologies, we have successfully
increased the prevailing proof accuracy from 38.9% to 44.3% on the miniF2F
benchmark. Furthermore, the adoption of diffusion models for demonstration
organization can lead to an additional enhancement in accuracy to 45.5%, or a
5 improvement in sampling efficiency compared with the long-standing state[of-the-art method. Our code is available at https://github.com/HKUNLP/](https://github.com/HKUNLP/subgoal-theorem-prover)
[subgoal-theorem-prover×](https://github.com/HKUNLP/subgoal-theorem-prover) .
**1** **Introduction**
Mathematical theorem proving constitutes a significant milestone in the pursuit of artificial intelligence. Recently, machine learning methodologies have spurred advancements in both formal and
informal theorem proving domains [36, 22]. Our study falls into the former category. In contrast
to informal theorem proving, formal methods have the advantage of leveraging interactive proof
assistants [33] to automatically validate proofs generated by models, delegating the verification
task to computational systems rather than human intervention. This significantly reduces the costs
associated with proof checking, and has been applied in software verification [20] and research-level
mathematics [5].
Recently, advances in large language models (LLMs) shed new light on the domain of formal theorem
proving. The complexity of automated theorem proving comes from the necessity of searching
through a vast space of possible logical statements and proof methods, in order to determine the
truth-value of a given theorem. LLMs reduce the difficulty of the searching problem by factorizing
the formal proof automation task into two in-context learning (§5.2) problems [46, 15, 8]. Given
Preprint. Under review.
-----
**Statement** **Bernoulli**
**Noise**
**Reverse** **Forward**
**Process** **Process**
**Statement**
**Statement** **Optimal**
**Subset & Order**
**Statement** **Subgoal- based Proof**
Suppose n is a positive natural number
such that 𝑛[!] + 2 −3 ∗𝑛 is a prime Step 1: Show that n > 2.
number. Show that n must be equal to
3. Step 2: Assume n is not greater than 2.
Step 3: Deduce that n = 1 or n = 2.
**Informal Proof**
Step 4: Show that this leads to a contradiction with
Factoring, we get 𝑛[!] −3𝑛+= the prime assumption.
𝑛−2 𝑛−1 . Step 5: Use the inequality n > 2 to find the
Either 𝑛−1 or 𝑛−2 is odd, and the expression for the given polynomial.
other is even. Their product must Step 6: Show that the polynomial is prime.
yield an even number. The only prime that is even is 2, which is when Step 7: Use the prime_product lemma to deduce that either n - 1 = 1 or n - 2 = 1.
𝑛 is 3 or 0. Since 0 is not a positive
number, the answer 3. Step 8: Use the inequality n > 2 to show that n = 3.
(a) Subgoal-based Proof
(b) Demonstration Reorganization
Figure 1: Left: An instance of informal proof and subgoal-based proof. Right: Employing diffusion
models to identify a more effective subset of demonstration examples, as well as the optimal order
for these examples.
a mathematical statement, an LLM first generates its informal proof as a draft. It then generates a
_formal sketch based on this draft, which is ready for an off-the-shelf prover to verify its correctness_
automatically.[1] In both of these steps, the quality of the demonstrative in-examples either written by
humans or generated by machines is the key to the performance of the system.
In this paper, we seek to improve the efficacy of LLMs in formal theorem proving by delving deeper
into the format and the organization of these demonstrative in-context examples. We present a
subgoal-based demonstration learning framework, comprising two main components. First, we
restructure an informal proof into a subgoal-based proof (Figure 1(a)), drawing upon the insights of
subgoal learning from reinforcement learning and robotics, where studies show that breaking down
complex tasks into smaller yet more uniformed subgoals enhances the learning efficiency of the
agents[7, 49]. To construct subgoal-based proofs that can be easily processed and handled by LLMs,
we start with human-written informal proofs and then iteratively refine them through interaction
with ChatGPT [43], guided by the subgoal learning theory (§2.1). Second, a recent study [47]
points out that the selection and the ordering of the in-context examples have a significant impact on
performance. The lengthy formal sketches in automatic theorem proving intensifies this impact, as
we can only present very few cases of demonstrations. In response to that, we train a diffusion model
to organize the demonstrative in-examples for the translation process from subgoal-based proof to its
corresponding formal sketch of each instance (§2.2). This approach identifies a more effective subset
of demonstration examples as well as the most beneficial order of these examples (Figure 1(b)).
The proposed method significantly outperforms competing approaches in formal theorem proving
tasks, achieving a pass rate of 45.5% on miniF2F dataset [52], a 6.6% absolute and 17.0% relative
improvement over the previous state-of-the-art system [15]. Furthermore, the adoption of diffusion
models for demonstration organization can lead to a significant improvement in sampling efficiency,
reaching previous state-of-the-art (38.5%) on miniF2F with only 20 (compared to 100) calls to the
LLM.
**2** **Subgoal-based Demonstration Learning**
Given a theorem statement x, the goal of proof synthesis is to generate a formal sketch y which
can be verified by an off-the-shelf automated theorem prover (e.g., Sledgehammer) [15]. In this
section, we propose the subgoal-based demonstration learning framework which consists of two key
components, subgoal-based proof (§2.1) and demonstration reorganization (§2.2). The subgoal-based
_proof replaces the informal proof, breaking down a complex problem into smaller subgoals that offer_
more fine-grained and uniform guidance to the LLMs. The demonstration reorganization takes place
in the stage of generating the formal sketch based on the subgoal-based proof. This procedure is
non-trivial. Given the limited context length of the LLMs, selecting relevant yet diverse demonstration
examples has a significant impact on the final pass rate of these formal sketches. We denote the set
1In practice, the informal proof often serves as inline comments in the formal sketch to better guide the
generation procedure.
-----
of all N demonstration examples by E _E1,E2,_ _,EN_ . Each of them contains a mathematical
_statement, an informal proof (or a subgoal-based proof_ ), and a formal sketch. In the remainder of
this section, we first describe the iterative refinement process that produces the subgoal-based proofs = { ⋯ }
given the informal proof, guided by the principles in subgoal learning theory [49]. We then explain
our solution to the demonstration reorganization. Starting from collecting arrangements that have
yielded successful proofs, we use these as training data for a diffusion model, which progressively
determines the most favorable reorganization during inference.
**2.1** **Subgoal-based Proof**
The significance of LLMs to formal theorem proving is that they grant us the ability to leverage
informal proofs to guide formal theorem proving, which otherwise has to be based on expensive
heuristics-based brute-force search. Despite considerable progress [22, 29], this approach suffers
from the flawed informal proofs generated by the LLMs [15]. We propose to use subgoal-based
proofs to replace the informal proofs, where the subgoals are strictly aligned with the states in the
automatic provers. Following Zhang et al. [49], we seek to obtain a valid sequence of subgoals which
satisfies the condition that each subgoal in this sequence should be reachable from both the initial
state (i.e., the statement) and the final state (i.e., the passing state of the proof). These valid sequences
integrate the guidance from the LLMs better with the search space of the automatic theorem provers,
thereby leveraging the ability of the LLMs to the maximum extent. However, it is non-trivial to get
these valid subgoal proofs as human-written subgoals often fall short of the above constraints. To
address this problem, we iteratively refine the subgoal proof, in the spirit of self-play in reinforcement
learning [39], making calls to both the LLM and the off-the-shelf automated theorem prover.
**Subgoal Refinement.** We start with manually written subgoal-based proofs, and denote these as
the initial seed set _Ei[(][0][)]_ _i_ 1[. This set contains subgoal-based proofs formed on the informal proofs]
and the statement, yet not guaranteed to be a valid sequence. We denote the sequence of subgoals
=
in an instance as {s0,s1,s}[N]2, _,s∆,s∆_ 1, where ∆ is the total number of subgoals and s0 and s∆ 1
are two special subgoals that align with the initial and final states of the automatic prover. During
+ +
the k-th iteration, we randomly select a subset of instances from the previous iteration ( ⋯ ) _Ei[(][k][−][1][)]_ _i_ 1
as the in-context demonstration for the LLM to generate subgoals for a given instance. According
=
to the definition, si is considered to be a valid subgoal if and only if it can be reached both from { }[N] s0
and s∆ 1. Therefore, for each of the subgoal, we recursively call the proof assistant to verify the
validness of the most recently developed subgoal and only after ∆ recursions we can obtain the new
+
valid sequence of subgoals and adds that into the next iteration as Ei[(][k][)]. This process improves the
consistency of the derived subgoals in style, thus making it easier for the LLM to learn from in the
inference stage. We provide a detailed description of the algorithm in Appendix A.
**2.2** **Demonstration Reorganization**
The demonstration examples can be lengthy in formal theorem proving. If we assume a maximum
context length of 3072 tokens, only 4.79 examples on average can be included. Our experiments
echo the findings by Wu et al. [47]. These instance-based demonstration examples have a significant
impact on performance. Only certain orders of carefully chosen demonstration examples lead to
successful theorem proving. Consequently, identifying the optimal subset from the pool and ordering
them into meaningful in-context demonstration examples is of great significance, which unfortunately
is an NP-complete problem. We form the demonstration reorganization problem as finding the
(Sub)hamiltonian graph where the nodes represent demonstration examples. A traverse following
the path corresponds to the selection and the ordering of the in-context examples. Building upon
the recent success of applying diffusion models in addressing NP-complete problems [10, 42], we
further formulate this problem into a diffusion process on the graph. This solution has two main
advantages. First, it addresses the example selection and ordering problem simultaneously. Second,
the inference can be performed in parallel, which greatly reduces the time of discovering the optimal
arrangement given the demonstration examples. We start from collecting successful pairs of incontext demonstration example organization and the corresponding statement x as the training data
for the diffusion model. We randomly organize (select and order) the demonstration examples and
query the LLM to see if it can generate the proof successfully. The passing cases will be used as the
starting configuration ψ0 in the diffusion process given the statement x.
-----
**Training.** The aim of employing diffusion models is to predict the optimal organization, denoted as
**_ψ0, conditioning on the theorem statement x. From the standpoint of variance inference, diffusion_**
models adopt the following formulations to model pθ **_ψ0_** _x_,
_pθ_ **_ψ0_** _x_ _pθ_ **_ψ(0_** _T_ ∣x )dψ1 _T,_ (1)
where ψ1, _,_ **_ψT serve as latent variables with the same dimensionality as(_** ∣ ) ∶= ∫ ( ∶ ∣ ) ∶ **_ψ0. The learned reverse_**
process progressively denoises these latent variables in order to reconstruct ψ0. This procedure can
be formalized as follows,⋯
_T_
_pθ_ **_ψ0_** _T_ _x_ _p_ **_ψT_** _pθ_ **_ψt_** 1 **_ψt,x_** _._ (2)
_t_ 1
∶ −
( ∣ ) = ( ) ∏= ( ∣ )
The forward process gradually corrupts ψ0 to generate noised latent variables,
_T_
_q_ **_ψ1_** _T_ **_ψ0_** _q_ **_ψt_** **_ψt_** 1 _._ (3)
_t_ 1
∶ −
( ∣ ) = ∏= ( ∣ )
The goal of the training process is to maximize the evidence lower bound (ELBO),
E log pθ **_ψ0_** _x_ Eq log _pθ_ **_ψ0_** _T_ _x_
_qθ_ **_ψ1_** _T_ ∶ψ0,x
( ∣ ) (4)
[ ( ∣ )] ≥ [ ∶
Eq log pθ **_ψ0_** **_ψ1,x_** _DKL_ _q_ **_ψt_** 1 (ψt, **_ψ ∣0_** _pθ_ )ψ[]]t 1 **_ψt,x_** _._
_t_ 1
− −
We employ a Graph Neural Network (GNN) for the encoding and denoising process of the graph.= [ ( ∣ ) −∑> [ ( ∣ )∥ ( ∣ )]]
Following Austin et al. [2], we adopt discrete diffusion models to model binary random variables.
**Inference.** During the inference stage, we obtain samples ψ _pθ_ **_ψ0_** _x_ and subsequently reconstruct the order of demonstration examples from ψ. We then incorporate examples sequentially into
the LLM context, and define the output of the demonstration organization module as the sequence ∼ ( ∣ )
of examples upon reaching the LLM length constraint. More details of the implementation of the
diffusion model, the implementation of GNN, and techniques used in the sampling process of ψ can
be found in Appendix B.
**3** **Experiments**
**3.1** **Formal Environment**
**Interactive Theorem Provers.** Interactive Theorem Provers (ITPs), such as Isabelle [32], constitute
the backbone of contemporary mathematical verification systems. They facilitate the integration of
mathematical definitions and theorems into a consistent logical framework, such as Higher-Order
Logic or Dependent Type Theory, which is operationalized by their kernels. The kernel plays a pivotal
role in the verification process, meticulously examining each theorem to ascertain its recognition by
the ITP and thereby ensuring the integrity of the system. The theorem proving process within an
ITP is characterized by the articulation of the theorem in the ITP’s programming language, followed
by an iterative simplification into more manageable objectives or subgoals. The theorem is deemed
proven once it can be distilled down to pre-established facts. The selection of Isabelle for our paper
is motivated by its intuitive interface, its compatibility with a range of logical frameworks, and its
comprehensive library of formalized mathematics.
**Sledgehammer.** Sledgehammer [34] serves as a powerful tool for automating reasoning within
the interactive theorem prover Isabelle. It functions by transmuting the goals encapsulated in
Isabelle/HOL’s higher-order logic into alternative logics, such as first-order logic. These transmuted
goals are then passed to off-the-shelf automated theorem provers, including E, CVC4, Z3, Vampire,
and SPASS. In the event that any of these automated theorem provers successfully derives the proof
in their respective formats, Sledgehammer undertakes the task of reconstructing the proof within the
Isabelle/HOL framework using certified provers, namely metis, meson, and smt. This reconstructed
proof, being more interpretable to humans, significantly enhances the system’s usability, thereby
contributing to the efficiency and effectiveness of (interactive) theorem proving.
-----
**3.2** **Dataset and Evaluation**
**Dataset.** We evaluate our approach using the miniF2F dataset [52], which comprises 488 formal
mathematical problems derived from high-school competitions, expressed in three formal languages:
Lean, HOL-Light, and Isabelle. The dataset is divided into a validation and a test set, each including
244 problems. The problems within the dataset are sourced from three distinct categories: 260
problems are extracted from the MATH dataset [13], 160 problems are extracted from actual highschool mathematical competitions (AMC, AIME, and IMO), and 68 problems are crafted to mirror
the difficulty level of the aforementioned competitions.
**Evaluation.** The task at hand entails the generation of formal sketches for problems in the miniF2F
dataset. The validity of a formal sketch depends on two criteria: first, the absence of “cheating”
keywords such as “sorry” and “oops” that prematurely terminate a proof prior to its completion;
second, the capacity of the interactive theorem prover Isabelle to authenticate the corresponding
formal statement with the proof. To make working with Isabelle easier, we use the Portal-to-Isabelle
API, as introduced by Jiang et al. [15]. Given the absence of a training split in the miniF2F dataset,
we leverage optimal organizations that yield successful proofs from the miniF2F-valid set to train the
diffusion model. As proposed by Lample et al. [21], we employ the cumulative pass rate as a measure
for the results obtained from performing inference using diffusion models on the miniF2F-valid
set. This involves integrating the pass rates from both the data collection stage for training and the
inference stage. When it comes to other scenarios, namely conducting inference on the miniF2F-test
or cases where the diffusion model is not employed, we simply provide the pass rate.
**3.3** **Baselines**
We use the following baselines to test the effectiveness of our proposed methodology.
**Symbolic Automated Provers.** We first employ Sledgehammer, a proof automation tool that is
extensively utilized within the Isabelle environment. We adhere to the default configuration of
Sledgehammer as provided in Isabelle2021, which encompasses a 120-second timeout and a suite
of five automated theorem provers (Z3, CVC4, SPASS, Vampire, E). In alignment with Jiang et al.
[15], we employ Sledgehammer supplemented with heuristics, integrating 11 prevalent tactics (i.e.,
auto, simp, blast, fastforce, force, eval, presburger, sos, arith, linarith, auto simp: field simps) with
Sledgehammer. If all the tactics fail or take longer than 10 seconds, the system reverts to the base
Sledgehammer.
**Search-based Methods.** In addition to the above, we incorporate baselines that utilize Monte-Carlo
tree search [39] to discover the proof. This includes Thor [16] and another version of Thor that
employs an expert iteration on autoformalized data (i.e., Thor+expert iteration [46]). Thor combines
language models with automatic theorem provers to overcome the challenge of selecting beneficial
premises from a vast library. Thor+expert iteration enhances a neural theorem prover by training it
on theorems that have been automatically formalized.
**LLM-based Method.** Lastly, we incorporate a LLM-based baseline, namely, Draft, Sketch and
_Prove (DSP) [15]. DSP turns informal proofs into formal sketches and leverages these formal sketches_
to steer an automated prover. Notably, we employ the variant of DSP that is implemented with the
540B Minerva model [22], as this particular implementation demonstrated superior performance in
their paper.
We exclude representative methods such as HyperTree Proof Search (HTPS) [21] and GPT-f with expert iteration [37], which are implemented using Lean [6], a different interactive theorem prover. The
disparity in tactics and automation between Lean and Isabelle renders them not directly comparable
to our method.
**3.4** **Implementation Details**
Throughout our work, we employ ChatGPT [2] as the LLM. For the creation of the formal sketch, the
temperature and max_tokens parameters of ChatGPT are set to 0 and 1024, respectively. In terms
2the gpt-3.5-turbo-0301 version
-----
Table 1: Pass rates on the miniF2F dataset with Isabelle. Numbers in bold denote the best performance.
Numbers with a correspond to the cumulative pass rate [21] since the evaluated statements are part
of the training for diffusion models. See §3.2 for more details about cumulative pass rate.
⋆
valid test
Sledgehammer 9.9% 10.4%
Sledgehammer+heuristic 18.0% 20.9%
Thor 28.3% 29.9%
Thor + expert iteration 37.3% 35.2%
DSP (540B Minerva) 42.6% 38.9%
Ours **48.0%[⋆]** **45.5%**
Table 2: Ablation results on the miniF2F dataset with Isabelle. Numbers with a correspond to the
cumulative pass rate.
⋆
valid test
Ours 48.0%[⋆] 45.5%
- subgoal & diffusion 41.8% 38.5%
- subgoal 44.3%[⋆] 40.6%
- diffusion 47.5% 44.3%
of the establishment of the subgoal-based proof, we set the number of refinement iterations to be
15, with the number of demonstration examples, denoted as N, being set to 61. For demonstration
organization, we employ a randomized demonstration organization approach to generate proofs for
116 distinct statements on miniF2F-valid, which yield 137 successful proofs. We then partition the
corresponding demonstration contexts into a training set and a validation set, comprising 81 and
56 instances respectively. The training of our diffusion models is conducted with a learning rate of
5e 4, a batch size of 16, and over a span of 50 epochs. We set the number of diffusion steps T to 80.
We employ an early stopping strategy on the validation set and report the performance averaged over
three different runs. −
**3.5** **Main Results**
The experiment results, as shown in Table 1, yield several key observations: (1) Our proposed method
achieves a pass rate of 48.0% on miniF2F-valid and 45.5% on miniF2F-test, surpassing all competing
methods. This superior performance is attributable to the subgoal-based proof coupled with usage
of diffusion models for demonstration reorganization; (2) The methods Thor and Thor + expert
iteration struggle due to the enormously large action space. This space significantly overshadows
that of games, thereby posing challenges to the comprehensive utilization of the Monte Carlo tree
search. Consequently, these methods underperform when compared to LLM-based methods; and (3)
DSP has pioneered the introduction of the informal proof, a critical step in the LLM-based formal
theorem proving task. However, human-written informal proofs do not offer optimal compatibility
with large language models. Our method, grounded in the subgoal-learning theory, is capable of
inferring subgoal-based proofs that are more amenable to large language models.
**4** **Analysis**
**4.1** **Ablation Study**
In our ablation study, we examine four variations of our model on the miniF2F dataset, as detailed in
Table 2. The models include our full method (Ours), and three variants with either the subgoal-based
proof, demonstration reorganization, or both components removed.
Our full model achieves the highest performance on the test set. This underscores the importance of
both subgoal-based proof and demonstration reorganization. The model without both components
showed the lowest performance, further emphasizing the significance of these components. The
-----
Problems Solved on the miniF2F-test
110 subgoal+diff
informal+diff
subgoal
105 informal
100
95
# Problems Solved 90
85
80
20 40 60 80 100
# LLM Calls Per Problem
Problems Solved on the miniF2F-test
diffusion
110 gnn
topk
105
100
95
# Problems Solved
90
85
20 30 40 50 60 70 80 90 100
# LLM Calls Per Problem
(a) Subgoal-based Proof
(b) Demonstration Reorganization
Figure 2: Number of problems solved on miniF2F-test against the number of LLM calls per problem.
**Left: a comparative assessment between the informal proof and subgoal-based proof under two**
distinct conditions: presence and absence of the diffusion model. Right: a comparative exploration
of different in-context learning methods.
models missing either the subgoal-based proof or reorganization components also show decreased
performance, indicating the substantial role of each component.
**4.2** **On the Effect of Subgoal-based Proof**
We further use four different variants to explore the impact of subgoal-based proof. Figure 2(a)
displays the results of this experiment, where “informal” denotes the utilization of informal proofs
instead of subgoal-based proof, and “diff” indicates the integration of demonstration reorganization.
The results indicate a significant difference between the approaches that incorporate subgoal-based
proof (“subgoal” and “subgoal+diff”) and those that do not (“informal” and “informal+diff”). This
trend remains consistent across all LLM call numbers, suggesting a noteworthy positive effect of
subgoal-based proof on the overall performance of our method.
**4.3** **On the Effect of Demonstration Reorganization**
To further investigate the effect of a diffusion model for demonstration reorganization, we draw a
comparative analysis between its performance and two alternative in-context learning methodologies:
the Graph Neural Network (GNN) and the Top-K. The GNN is congruent with a modified version
of our proposed model when the inference diffusion step is set to 1, while the efficacy of the Top-K
methodology has been extensively substantiated in the literature [25]. Figure 2(b) presents the
empirical results, manifesting that the diffusion model’s performance increment diminishes as the
number of LLM calls escalates to 100. This phenomenon stems from the fact that the module is
trained on data collated from successful proofs via randomized organization sampling. Consequently,
it may encounter difficulties in discerning the optimal organization for data that deviates significantly
from its training dataset. Nevertheless, this limitation does not overshadow the potential of diffusion
models to economize the number of LLM calls. Notably, with demonstration reorganization, our
method exhibits an impressive capability of successfully deriving proofs for 94 problems (equivalently,
a pass rate of 38.5%), with a mere 20 LLM calls. Remarkably, this result is comparable with that of
the DSP method, which necessitates 5 the number of LLM calls.
×
**4.4** **Case Study**
To better comprehend the efficacy of our proposed method, we present a formal sketch of a problem
that remains unproven by earlier state-of-the-art methods. As demonstrated in Figure 3, it is apparent
that our strategy successfully decomposes the complex objective into three manageable subgoals,
each solvable by the LLM. We provide additional comprehensive examples in Appendix C.
-----
**Statement: Suppose n is a natural number such that n leaves a remainder of 3 when divided by 7. Show that (2n+1) leaves a**
remainder of 0 when divided by 7.
**Subgoal-based Proof:**
Step 1: Express n as 7𝑘+ 3 for some k ∈ℕ
Step 2: Substitute the expression for 𝑛 into the expression for 2𝑛+ 1 and simplify.
Step 3: Show that the resulting expression is divisible by 7.
**Formal Sketch:**
mathd_numbertheory_370:
n :: nat
h0 :
-
(* Step 1: Express n as 7𝑘+ 3 for some k ∈ℕ. *)
div_mult_mod_eq[of "n" "7"]
h0
c0:
(* Step 2: Substitute the expression for 𝑛 into the expression for 2𝑛+ 1 and simplify. *)
c0
c1:
(* Step 3: Show that the resulting expression is divisible by 7.*)
c1
Figure 3: A formal sketch generated by our proposed method.
**5** **Related Work**
**5.1** **Machine Learning for Formal Theorem Proving**
Machine learning-based formal theorem proving systems primarily fall into two categories: those
focusing on proof search strategies and premise selection, and those harnessing Large Language
Models (LLMs) for autoformalization and proof generation. The first category, represented by works
like Expert Iteration [37] and PACT [11], devise novel learning strategies to enhance proof search,
extracting self-supervised data from kernel-level proof terms. Systems such as HyperTree Proof
Search (HTPS)[21] and Thor[16] integrate language models with automated theorem provers, while
Magnushammer [27] presents a transformer-based approach for premise selection. While these
techniques have proven effective, they struggle with increasing computational costs as theorems
grow more complex. The second category exploits the potential of LLMs in the formalization of
mathematical proofs. Both Wu et al. [46] and Jiang et al. [15] demonstrate that LLMs can convert
mathematical problems into formal specifications, with the latter utilizing these translations to guide
an automated prover. Baldur [8] extends this approach by generating entire proofs at once and
introducing a proof repair model to enhance proving power. However, these approaches have yet
to fully leverage the power of LLMs due to a lack of emphasis on the format and organization
of demonstration examples. Our work aims to address this gap by introducing a subgoal-based
demonstration learning framework that refines the use of LLMs in formal theorem proving.
**5.2** **In-context Learning**
In the field of In-Context Learning (ICL), research primarily focuses on two main areas: (1) the
selection of in-context examples, and (2) the arrangement of these examples in the learning context.
With regard to the first area, Liu et al. [25] suggest a retrieval-based prompt selection method, offering
a thoughtful alternative to random example selection. This method aims to find examples that are
semantically similar to a test sample to form related prompts. Building on this idea, Rubin et al. [38]
propose an effective retrieval process for prompts, using a pre-trained language model. Sorensen et al.
[40] further the exploration by introducing a new way to select prompt templates that don’t need
labeled examples or direct access to the model. Instead, they choose the template that maximizes the
-----
mutual information between the input and the corresponding model output. Su et al. [41] present a
two-step framework that is efficient in annotation. It first selects a set of examples from unlabeled
data, and then retrieves task examples from the annotated set during testing. Lastly, Agrawal et al. [1]
focus on creating strategies specifically for machine translation tasks, emphasizing the importance of
the quality and domain of in-context examples, and warning against the negative effects of unrelated
noisy examples. Works in the second area examine the significance of the order in which prompts
are presented. Zhao et al. [51] point out the instability in few-shot learning caused by the order of
training examples and suggest a calibration method to tackle this. Lu et al. [26] delve deeper into
this analysis, demonstrating the sensitivity of prompt order in few-shot learning situations. Even
though previous efforts have made remarkable progress in either choosing or sequencing in-context
examples, our research sets a new precedent by combining both elements. In this paper, we step out
of these isolated areas of concentration, looking into an approach based on diffusion models that
effectively tackles both the challenges of selection and ordering at the same time.
**5.3** **Subgoal Learning**
Subgoal learning is a pivotal concept in reinforcement learning. It can enable AI systems to solve
complex, long-horizon tasks more effectively. Crucially, theoretical analyses have shed light on key
concepts including the computational benefits of rewarding subgoals [48], the structure of Markov
decision processes beneficial for hierarchical reinforcement learning [45], the complexity of optimal
option selection for planning [17], and the integration of temporal abstraction into RL [9]. Empirical
analyses in this field mainly focus on subgoal exploration, subgoal generation for planning, and
curriculum learning for subgoals. Subgoal exploration aims to find the optimal or efficient exploration
of subgoals, employing a variety of strategies. These include minimizing cover time [18], learning
dynamical distances [12], maximizing entropy [35], and utilizing asymmetric self-play [30]. Subgoal
planning research encompasses diverse algorithms for improved decision-making. For example,
SoRB [7] uses RL to build a graph for subgoal sequences, DC-MCTS [31] applies learned subgoal proposals to partition tasks, PAIR [24] combines online RL and offline supervised learning,
and Moro et al. [28] extend MCTS with Hindsight Experience Replay for goal-oriented planning.
The research centered on curriculum learning proposes various techniques to create a learning
curriculum that gradually intensifies subgoal complexity, thereby optimizing learning efficiency and
effectiveness [50, 49]. While there have been preliminary efforts to apply similar principles in the
construction of prompts for LLMs [19], the deployment of subgoal learning theories to manage
intricate tasks, such as formal theorem proving, remains largely unexplored. Our work pioneers the
use of subgoal learning in this domain, with a focus on format and organization.
**6** **Conclusion & Discussion**
In this paper, we have developed a subgoal-based demonstration learning framework that significantly
enhances LLMs’ efficacy in formal theorem proving. Our approach combines insights from subgoal
learning and diffusion models, effectively addressing the challenges of demonstration formatting and
organization. As a result, we achieve a 17.0% relative improvement in proof pass rate on the miniF2F
benchmark and a 5 improvement in sampling efficiency. Our work lays the foundation for future
endeavors in leveraging AI for generating, validating, and contributing novel insights to automated
theorem proving. ×
Despite the significant advancements achieved through our subgoal-based demonstration learning
framework, several limitations of our work exist. Firstly, the process of transforming informal proofs
into subgoal-based proofs is an iterative procedure involving interaction with ChatGPT, which may
introduce noise and inconsistencies. As our methodology relies on this transformation process, errors
introduced at this stage may propagate and affect the final result. Secondly, while the diffusion models
we adopted were effective in organizing the demonstrative in-examples, they are computationally
demanding. This can pose challenges for real-time or resource-constrained applications. Lastly, we
only evaluated our framework on the miniF2F dataset. We are expecting to see its performance on
other benchmarks and more complex, undergraduate-level mathematical problems [3].
-----
**References**
[1] Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad.
In-context examples selection for machine translation. arXiv preprint arXiv:2212.02437, 2022.
[2] Jacob Austin, Daniel D Johnson, Jonathan Ho, Daniel Tarlow, and Rianne van den Berg.
Structured denoising diffusion models in discrete state-spaces. Advances in Neural Information
_Processing Systems, 34:17981–17993, 2021._
[3] Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W Ayers, Dragomir Radev,
and Jeremy Avigad. Proofnet: Autoformalizing and formally proving undergraduate-level
mathematics. arXiv preprint arXiv:2302.12433, 2023.
[4] Xavier Bresson and Thomas Laurent. An experimental study of neural networks for variable
graphs. 2018.
[5] Davide Castelvecchi et al. Mathematicians welcome computer-assisted proof in ‘grand unification’theory. Nature, 595(7865):18–19, 2021.
[6] Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer.
The lean theorem prover (system description). In Automated Deduction-CADE-25: 25th Inter_national Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings_
_25, pages 378–388. Springer, 2015._
[7] Ben Eysenbach, Russ R Salakhutdinov, and Sergey Levine. Search on the replay buffer:
Bridging planning and reinforcement learning. Advances in Neural Information Processing
_Systems, 32, 2019._
[8] Emily First, Markus N Rabe, Talia Ringer, and Yuriy Brun. Baldur: Whole-proof generation
and repair with large language models. arXiv preprint arXiv:2303.04910, 2023.
[9] Ronan Fruit, Matteo Pirotta, Alessandro Lazaric, and Emma Brunskill. Regret minimization
in mdps with options without prior knowledge. Advances in Neural Information Processing
_Systems, 30, 2017._
[10] Alexandros Graikos, Nikolay Malkin, Nebojsa Jojic, and Dimitris Samaras. Diffusion models
as plug-and-play priors. arXiv preprint arXiv:2206.09012, 2022.
[11] Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203, 2021.
[12] Kristian Hartikainen, Xinyang Geng, Tuomas Haarnoja, and Sergey Levine. Dynamical distance
learning for semi-supervised and unsupervised skill discovery. arXiv preprint arXiv:1907.08225,
2019.
[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset.
_arXiv preprint arXiv:2103.03874, 2021._
[14] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training
by reducing internal covariate shift. In International conference on machine learning, pages
448–456. pmlr, 2015.
[15] Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timothée
Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem
provers with informal proofs. arXiv preprint arXiv:2210.12283, 2022.
[16] Albert Qiaochu Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz
Odrzygó´zd´z, Piotr Miło´s, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate
language models and automated theorem provers. Advances in Neural Information Processing
_Systems, 35:8360–8373, 2022._
[17] Yuu Jinnai, David Abel, David Hershkowitz, Michael Littman, and George Konidaris. Finding
options that minimize planning time. In International Conference on Machine Learning, pages
3120–3129. PMLR, 2019.
-----
[18] Yuu Jinnai, Jee Won Park, David Abel, and George Konidaris. Discovering options for exploration by minimizing cover time. In International Conference on Machine Learning, pages
3130–3139. PMLR, 2019.
[19] Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and
Ashish Sabharwal. Decomposed prompting: A modular approach for solving complex tasks.
_arXiv preprint arXiv:2210.02406, 2022._
[20] Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin,
Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, et al. sel4: Formal
verification of an os kernel. In Proceedings of the ACM SIGOPS 22nd symposium on Operating
_systems principles, pages 207–220, 2009._
[21] Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury
Hayat, Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural
theorem proving. Advances in Neural Information Processing Systems, 35:26337–26349, 2022.
[22] Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay
Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, et al. Solving
quantitative reasoning problems with language models. arXiv preprint arXiv:2206.14858, 2022.
[23] Alexander C Li, Mihir Prabhudesai, Shivam Duggal, Ellis Brown, and Deepak Pathak. Your
diffusion model is secretly a zero-shot classifier. arXiv preprint arXiv:2303.16203, 2023.
[24] Yunfei Li, Tian Gao, Jiaqi Yang, Huazhe Xu, and Yi Wu. Phasic self-imitative reduction
for sparse-reward goal-conditioned reinforcement learning. In International Conference on
_Machine Learning, pages 12765–12781. PMLR, 2022._
[25] Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen.
What makes good in-context examples for gpt-3? arXiv preprint arXiv:2101.06804, 2021.
[26] Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. Fantastically
ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity. arXiv
_preprint arXiv:2104.08786, 2021._
[27] Maciej Mikuła, Szymon Antoniak, Szymon Tworkowski, Albert Qiaochu Jiang, Jin Peng
Zhou, Christian Szegedy, Łukasz Kuci´nski, Piotr Miło´s, and Yuhuai Wu. Magnushammer: A
transformer-based approach to premise selection. arXiv preprint arXiv:2303.04488, 2023.
[28] Lorenzo Moro, Amarildo Likmeta, Enrico Prati, Marcello Restelli, et al. Goal-directed planning
via hindsight experience replay. In International Conference on Learning Representations,
pages 1–16, 2022.
[29] OpenAI. GPT-4 Technical Report. arXiv e-prints, art. arXiv:2303.08774, March 2023. doi:
10.48550/arXiv.2303.08774.
[30] OpenAI OpenAI, Matthias Plappert, Raul Sampedro, Tao Xu, Ilge Akkaya, Vineet Kosaraju,
Peter Welinder, Ruben D’Sa, Arthur Petron, Henrique P d O Pinto, et al. Asymmetric self-play
for automatic goal discovery in robotic manipulation. arXiv preprint arXiv:2101.04882, 2021.
[31] Giambattista Parascandolo, Lars Buesing, Josh Merel, Leonard Hasenclever, John Aslanides,
Jessica B Hamrick, Nicolas Heess, Alexander Neitz, and Theophane Weber. Divide-and-conquer
monte carlo tree search for goal-directed planning. arXiv preprint arXiv:2004.11410, 2020.
[32] Lawrence C Paulson. Isabelle: A generic theorem prover. Springer, 1994.
[33] Lawrence C Paulson. Isabelle: The next 700 theorem provers. arXiv preprint cs/9301106, 2000.
[34] Lawrence C Paulsson and Jasmin C Blanchette. Three years of experience with sledgehammer,
a practical link between automatic and interactive theorem provers. In Proceedings of the 8th
_International Workshop on the Implementation of Logics (IWIL-2010), Yogyakarta, Indonesia._
_EPiC, volume 2, 2012._
-----
[35] Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie, and Jimmy Ba. Maximum entropy gain
exploration for long horizon multi-goal reinforcement learning. In International Conference on
_Machine Learning, pages 7750–7761. PMLR, 2020._
[36] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
[37] Stanislas Polu, Jesse Michael Han, Kunhao Zheng, Mantas Baksys, Igor Babuschkin, and Ilya
Sutskever. Formal mathematics statement curriculum learning. arXiv preprint arXiv:2202.01344,
2022.
[38] Ohad Rubin, Jonathan Herzig, and Jonathan Berant. Learning to retrieve prompts for in-context
learning. arXiv preprint arXiv:2112.08633, 2021.
[39] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of go with deep neural networks and tree search. nature, 529(7587):484–489,
2016.
[40] Taylor Sorensen, Joshua Robinson, Christopher Michael Rytting, Alexander Glenn Shaw,
Kyle Jeffrey Rogers, Alexia Pauline Delorey, Mahmoud Khalil, Nancy Fulda, and David
Wingate. An information-theoretic approach to prompt engineering without ground truth labels.
_arXiv preprint arXiv:2203.11364, 2022._
[41] Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang,
Mari Ostendorf, Luke Zettlemoyer, Noah A Smith, et al. Selective annotation makes language
models better few-shot learners. arXiv preprint arXiv:2209.01975, 2022.
[42] Zhiqing Sun and Yiming Yang. Difusco: Graph-based diffusion solvers for combinatorial
optimization. arXiv preprint arXiv:2302.08224, 2023.
[43] OpenAI Team. Chatgpt: Optimizing language models for dialogue, 2022.
[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information
_processing systems, 30, 2017._
[45] Zheng Wen, Doina Precup, Morteza Ibrahimi, Andre Barreto, Benjamin Van Roy, and Satinder
Singh. On efficiency in hierarchical reinforcement learning. Advances in Neural Information
_Processing Systems, 33:6708–6718, 2020._
[46] Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik,
and Christian Szegedy. Autoformalization with large language models. Advances in Neural
_Information Processing Systems, 35:32353–32368, 2022._
[47] Zhiyong Wu, Yaoxiang Wang, Jiacheng Ye, and Lingpeng Kong. Self-adaptive in-context
learning. arXiv preprint arXiv:2212.10375, 2022.
[48] Yuexiang Zhai, Christina Baek, Zhengyuan Zhou, Jiantao Jiao, and Yi Ma. Computational benefits of intermediate rewards for goal-reaching policy learning. Journal of Artificial Intelligence
_Research, 73:847–896, 2022._
[49] Tianjun Zhang, Benjamin Eysenbach, Ruslan Salakhutdinov, Sergey Levine, and Joseph E
Gonzalez. C-planning: An automatic curriculum for learning goal-reaching tasks. arXiv preprint
_arXiv:2110.12080, 2021._
[50] Yunzhi Zhang, Pieter Abbeel, and Lerrel Pinto. Automatic curriculum learning through value
disagreement. Advances in Neural Information Processing Systems, 33:7648–7659, 2020.
[51] Zihao Zhao, Eric Wallace, Shi Feng, Dan Klein, and Sameer Singh. Calibrate before use:
Improving few-shot performance of language models. In International Conference on Machine
_Learning, pages 12697–12706. PMLR, 2021._
[52] Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
-----
**A** **More Details about Subogal-based Proof**
We provide a detailed description on the subgoal refinement method (see §2.1) through Algorithm 1
and Algorithm 2. In the k-th iteration, we construct demonstration examples _Ei[(][k][)]_ _i_ 1 [using]
improved subgoal-based proofs. To construct Ei[(][k][)], we first extract the statement and formal sketch=
{ }[N]
from Ei[(][k][−][1][)], then use an LLM to generate subgoals. Afterward, a Refine module is called to confirm
the validity of the created subgoals and adjust any subgoals identified as infeasible.
We present an example to elucidate this process further (see Figures 4 to 12).[3] As shown in Figure 4,
the LLM creates two subgoals for the theorem amc12a_2003_p4, leading to _s0,s1,s2,s3_ . Refining
these subgoals involves calling verify_and_correct _s0,s1_ to improve the subgoal s1. This is
depicted in Figures 5 to 12. We first use the LLM to reconstruct the subgoal related to the first { }
step, but this attempt fails (Figure 5). Then we break down the subgoal( ) _s1 into three more detailed_
subgoals (Figure 6), each of which is then verified using the same LLM (Figures 7 to 9). Due
to the unsuccessful reconstruction of the second subgoal (Figure 8), it is further broken down
into two more specific subgoals (Figure 10). The last two subgoals pass the verification process
successfully (Figures 11 and 12). Finally, the output of verify_and_correct _s0,s1_, namely S[0][→][1],
is defined as the set that includes the steps from 1 to 4 shown in Figure 12.
( )
**Algorithm 1 Iterative Subgoal Refinement**
**Requires:** EXTRACT extraction of statement and formal sketch
COMPOSE composing of a statement, formal sketch
and subgoals to form a demonstration example
INITIALIZE_SUBGOALS generate subgoals with a LLM
**function ITERATIVE_REFINEMENT(** _E1[(][0][)][,E]2[(][0][)][,]_ [⋯][,E]N[(][0][)]
**for k in 1,** 2,...,K do
**for i in 1,** 2,...,N do { [}][)]
_x, y_ EXTRACT _Ei[(][k][−][1][)]_
_s0,s1,_ _,s∆,s∆_ 1 INITIALIZE_SUBGOALS _x,y,E[(][k][−][1][)]_
← ( )
_S[0][→(][∆][+][1][)]_ REFINE+ _s0,s∆_ 1, _s1,s2,_ _,s∆_
_Ei[(][k][)]_ ⋯COMPOSE ←x,y,S[0][→(]+[∆][+][1][)] ( )
← (( { ⋯ }))
**end for**
**end for** ← ( )
**return** _E1[(][K][)],E2[(][K][)],_ _,EN[(][K][)]_
**end function**
{ ⋯ }
**Algorithm 2 Refinement Algorithm**
**Requires:** VERIFY_AND_CORRECT verify the validness of the subgoals and correct them
if necessary
**function REFINE(si, sj** 1, _si_ 1, _,sj_ )
**if i = j then return VERIFY_AND_CORRECT** _si,si_ 1
+ +
**end if** { ⋯ }
+
_S[i][→][i][+][1]_ REFINE _si,si_ 1, ( )
_S[i][+][1][→][j][+][1]_ REFINE _si_ 1,sj 1, _si_ 2, _,sj_
+
**return S←[i][→][i][+][1]** _S([i][+][1][→][j][+][1]_ {})
+ + +
**end function** ← ( { ⋯ })
∪
3To simplify the illustration, we leave out redundant demonstration examples.
-----
Your task is to transform a formal proof, written in a style suitable for a theorem-proving system, into a more explanatory and narrative
style, suitable for humans. The task involves adding explanatory comments that break down the formal proof into understandable steps,
providing context and insight into the underlying logical structure and reasoning.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
define x y where "x=ln a" and "y=ln b"
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: With the definitions 𝑥= 𝑙𝑛𝑎, 𝑦= 𝑙𝑛𝑏, and the constraints 𝑦> 0 and 𝑥≥𝑦, the equation simplifies into 2 −_ [!]" [−] !["][. *)]
define x y where "x=ln a" and "y=ln b"
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 2: Using the inequality (_ !" [−] !"[)][#][≥0][, it is shown that the simplified equation ][2 −] [!]" [−] !["] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 4: Illustration of the Initialize_subgoals function as denoted in Algorithm 1. ChatGPT is
leveraged to generate the subgoal-based proof with respect to a formal sketch.
-----
Your task is to complete the missing part of a Isabelle proof based on the corresponding description provided. The proof will consist of
several steps, and you will need to use your knowledge of mathematical concepts and logical reasoning to fill in the gap.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: With the definitions 𝑥= 𝑙𝑛𝑎, 𝑦= 𝑙𝑛𝑏, and the constraints 𝑦> 0 and 𝑥≥𝑦, the equation simplifies into 2 −_ [!]" [−] !["][. *)]
_(* Step 2: Using the inequality (_ !" [−] !"[)][#][≥0][, it is shown that the simplified equation ][2 −] [!]" [−] !["] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: With the definitions 𝑥= 𝑙𝑛𝑎, 𝑦= 𝑙𝑛𝑏, and the constraints 𝑦> 0 and 𝑥≥𝑦, the equation simplifies into 2 −_ [!]" [−] !["][. *)]
define x y where "x=ln a" and "y=ln b"
have "x > 0" and "y > 0"
using assms by (simp_all add: ln_gt_zero)
have "?L = 2 - y / x - x / y"
using assms x_def y_def by (simp add: add_divide_distrib diff_divide_distrib ln_div)
_(* Step 2: Using the inequality (_ !" [−] !"[)][#][≥0][, it is shown that the simplified equation ][2 −] [!]" [−] !["] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 5: An instance of the “verify” component within the Verify_and_correct function in Algorithm 2. ChatGPT encounters a failure in reconstructing the proof associated with step 1, thereby
deeming it an unsuitable subgoal.
-----
The Isabelle Proof Assistant was unable to verify Step 1. To resolve this, consider providing more detailed information or clarity to
enhance the explanation for this particular step.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: With the definitions 𝑥= 𝑙𝑛𝑎, 𝑦= 𝑙𝑛𝑏, and the constraints 𝑦> 0 and 𝑥≥𝑦, the equation simplifies into 2 −_ [!]" [−] !["][. *)]
define x y where "x=ln a" and "y=ln b"
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 2: Using the inequality (_ !" [−] !"[)][#][≥0][, it is shown that the simplified equation ][2 −] [!]" [−] !["] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
"$!
" [+][ !$"]! _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 3: Simplify the expression to 2 −_ [!]" [−] !["][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ !" [−] !"[)][#][≥0][, it is shown that the simplified equation ][2 −] [!]" [−] !["] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 6: An instance of the “correct” component within the Verify_and_correct function in Algorithm 2. ChatGPT works on the decomposition of the original subgoal (i.e., step 1 in the input) into a
series of more granular subgoals (i.e., step 1 - 3 in the output).
-----
Your task is to complete the missing part of a Isabelle proof based on the corresponding description provided. The proof will consist of
several steps, and you will need to use your knowledge of mathematical concepts and logical reasoning to fill in the gap.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 7: An instance of the “verify” component within the Verify_and_correct function in Algorithm 2. ChatGPT successfully reconstructs the proof associated with step 1, thus validating it as a
viable subgoal.
-----
Your task is to complete the missing part of a Isabelle proof based on the corresponding description provided. The proof will consist of
several steps, and you will need to use your knowledge of mathematical concepts and logical reasoning to fill in the gap.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge> y"
using \<open>b\<le>a\<close> unfolding x_def y_def using ln_mono sledgehammer
hence "(x - y)/x + (y - x)/y = 2 - y/x - x/y"
using \<open>x>0\<close> \<open>y>0\<close> by (simp add: diff_divide_distrib)
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 8: An instance of the “verify” component within the Verify_and_correct function in Algorithm 2. ChatGPT encounters a failure in reconstructing the proof associated with step 2, thereby
deeming it an unsuitable subgoal.
-----
Your task is to complete the missing part of a Isabelle proof based on the corresponding description provided. The proof will consist of
several steps, and you will need to use your knowledge of mathematical concepts and logical reasoning to fill in the gap.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 9: An instance of the “verify” component within the Verify_and_correct function in Algorithm 2. ChatGPT successfully reconstructs the proof associated with step 3, thus validating it as a
viable subgoal.
-----
The Isabelle Proof Assistant was unable to verify Step 2. To resolve this, consider providing more detailed information or clarity to
enhance the explanation for this particular step.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: With the conditions set as 𝑦> 0 and 𝑥≥𝑦, we transition the equation to be framed in terms of 𝑥_ _and 𝑦, thereby yielding_
!"#
! [+][ #"!]# _[. *) ]_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 3: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 4: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: Show that 𝑦> 0 and 𝑥≥𝑦. *)_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
_(* Step 3: Use the definitions of 𝑥_ _and 𝑦_ _to rewrite the expression as [!"#]!_ [+][ #"!]# _[. *)]_
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 4: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 5: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 10: An instance of the “correct” component within the Verify_and_correct function in
Algorithm 2. ChatGPT works on the decomposition of the original subgoal (specifically, step 2 in the
input) into a series of more granular subgoals (specifically, step 2 - 3 in the output).
-----
Your task is to complete the missing part of a Isabelle proof based on the corresponding description provided. The proof will consist of
several steps, and you will need to use your knowledge of mathematical concepts and logical reasoning to fill in the gap.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: Show that 𝑦> 0 and 𝑥≥𝑦. *)_
_(* Step 3: Use the definitions of 𝑥_ _and 𝑦_ _to rewrite the expression as [!"#]!_ [+][ #"!]# _[. *)]_
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 4: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 5: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: Show that 𝑦> 0 and 𝑥≥𝑦. *)_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
_(* Step 3: Use the definitions of 𝑥_ _and 𝑦_ _to rewrite the expression as [!"#]!_ [+][ #"!]# _[. *)]_
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 4: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 5: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 11: An instance of the “verify” component within the Verify_and_correct function in
Algorithm 2. ChatGPT successfully reconstructs the proof associated with step 2, thus validating it as
a viable subgoal.
-----
Your task is to complete the missing part of a Isabelle proof based on the corresponding description provided. The proof will consist of
several steps, and you will need to use your knowledge of mathematical concepts and logical reasoning to fill in the gap.
**Input:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: Show that 𝑦> 0 and 𝑥≥𝑦. *)_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
_(* Step 3: Use the definitions of 𝑥_ _and 𝑦_ _to rewrite the expression as [!"#]!_ [+][ #"!]# _[. *)]_
_(* Step 4: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 5: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
**Output:**
theorem amc12a_2003_p24:
fixes a b::real
assumes "b \<le>a" and "1<b"
shows "ln (a/b) / ln a + ln (b/a) / ln b \<le>0" (is "?L \<le> _")
proof –
_(* Step 1: Define 𝑥= 𝑙𝑛𝑎_ _and 𝑦= 𝑙𝑛𝑏. *)_
define x y where "x=ln a" and "y=ln b"
_(* Step 2: Show that 𝑦> 0 and 𝑥≥𝑦. *)_
have "y>0"
using \<open>b>1\<close> unfolding y_def using ln_gt_zero sledgehammer
moreover have "x \<ge>y"
using \<open>a\<ge>b\<close> unfolding x_def y_def using assms(2) sledgehammer
ultimately have "x>0"
sledgehammer
_(* Step 3: Use the definitions of 𝑥_ _and 𝑦_ _to rewrite the expression as [!"#]!_ [+][ #"!]# _[. *)]_
have "?L = (x-y)/x + (y-x)/y"
apply (subst (1 2) ln_div)
using assms unfolding x_def y_def sledgehammer
_(* Step 4: Simplify the expression to 2 −_ [#]! [−] #[!][. *)]
also have "... = 2 - (y/x + x/y)"
using \<open>x>0\<close> \<open>y>0\<close> sledgehammer
_(* Step 5: Using the inequality (_ #! [−] #![)][$][≥0][, it is shown that the simplified equation ][2 −] [#]! [−] #[!] _[is less than or equal to 0. *)]_
also have "... \<le> 0"
proof –
have "sqrt (x/y) * sqrt (y/x) = 1"
sledgehammer
have "0\<le> (sqrt (x/y) - sqrt (y/x))^2"
sledgehammer
also have "... = x / y - 2 + y / x"
unfolding power2_eq_square using \<open>x>0\<close> \<open>y>0\<close> \<open>sqrt (x/y) * sqrt (y/x) = 1\<close>
sledgehammer
finally show ?thesis
sledgehammer
qed
finally show ?thesis.
qed
Figure 12: An instance of the “verify” component within the Verify_and_correct function in
Algorithm 2. ChatGPT successfully reconstructs the proof associated with step 3, thus validating it as
a viable subgoal.
-----
**B** **More Details about Demonstration Reorganization**
**B.1** **Parameterization**
In alignment with Austin et al. [2], we adopt discrete diffusion models to model binary random
variables. Explicitly, the forward process is given by:
_q_ **_ψt_** **_ψt_** 1 Cat **_ψt;_** **p** _δ_ **_ψt_** 1 **_Qt_** _,_ (5)
− − _βt_
where δ **_ψ_** symbolizes the one-hot encoding of( ∣ ) = **_ψ(_**, Qt = ( _βt_ ) )1 _βt_
matrix,step t and the posterior at step β( _t corresponds to the corruption ratio and satisfies that)_ _t_ 1 can be articulated as: = [[(][1][ −] _[β][t][)]_ _t_ 1([(][1] −[ −] _[β])[t][) ≈][]][ denotes the transition][0][. The marginal at]_
=
∏[T]
_q_ **_ψt_** **_ψ0_** Cat **_ψt;_** **p** _δ_ **_ψ0_** **_Qt_** _,_
−
(6)
_t_ 0[)][Q]t 1
_q_ **_ψt_** 1(ψt,∣ψ0) = Cat (ψt 1; =p ( _[δ][(][ψ])_ _[t][)][Q])_ [⊺] _,_
_δ_ **_ψ0_** **_Qtδ_** **_ψt_** −
where Qt **_Q1Q2 ...(_** **_Q−_** _t∣. In consonance with Austin et al.) =_ ( − = [2][⊙] _[δ], we employ a denoising neural[(][ψ]_ )
( ) ( )[⊺]
network which is tasked with the prediction of p **_ψ0_** **_ψt_**, thereby enabling the parameterization of
the reverse process: =
( ∣ )
_pθ_ **_ψt_** 1 **_ψt,x_** _q_ **_ψt_** 1 **_ψt,_** **_ψ0_** _pθ_ **_ψ0_** **_ψt,x_** _._ (7)
**_ψ_**
− −
( ∣ ) ∝∑ ( ∣ ) ( ∣ )
**B.2** **Implementation of GNN**
Our work employs a modified version of GNN, a model that exhibits anisotropic characteristics and
is enhanced by edge gating methodologies [4, 42]. We define t as sinusoidal representations [44]
associated with the denoising timestep t. Consider h[ℓ]i [and][ e]ij[ℓ] [as the features of node][ i][ and edge][ ij][ at]
a specific layer ℓ, respectively. During the transition between layers, these features disseminate via an
anisotropic message propagation paradigm as follows:
**_eˆ[ℓ]ij[+][1]_** **_P_** _[ℓ]e[ℓ]ij_ _i_ _j[,]_
**_e[ℓ]ij[+][1]_** **_e[ℓ]ij_** **_e[ℓ]ij[+][1]_** (8)
= [+][ Q][ℓ][h][ℓ] [+][ R][ℓ][h][ℓ]
**_h[ℓ]i[+][1]_** **_h[ℓ]i_** _i_ _i_ [(][σ][(]e[ˆ][ℓ]ij[+][1] _j[)))][,]_
= [+][ MLP][e][(][BN][(][ˆ] [)) +][ MLP][t][(][t][)][,]
where P _[ℓ],_ **_Q[ℓ],_** **_R[ℓ],_** **_U_** _[ℓ]=,_ **_V_** _[ℓ]_ [+][ ReLU]R[d][×][d] denote layer-specific learnable parameters with[(][BN][(][U][ ℓ][h][ℓ] [+][ SUM][j][∈N] [) ⊙] **_[V][ ℓ][h][ℓ]_** _d denoting the_
dimension of hidden state. BN signifies the Batch Normalization operation [14], while SUM represents sum pooling. designates the Hadamard product, and∈ _i encapsulates the set of neighboring_
nodes of node i. Lastly, a two-layer multi-layer perceptron is denoted by MLP .
⊙ N
In our experiments, we define h[0]i _i_ where W R(⋅)[d][×][3072] is a learnable
parameter. Ada _x_ _,_ Ada _Ei[(][K][)]_ R[1536][×][1] denote the ada embeddings [4] of the statement x and the
_i-th demonstration example, respectively. The operator[=][ W][ [][Ada][(][x][)][;Ada][(][E][(][K];_ [)])] denotes the concatenation operation ∈
between two vectors.( ) **_e[0]ij([are initialized as sinusoidal features of the edges.]) ∈_**
[⋅ ⋅]
**B.3** **Sampling Process**
A straightforward strategy for creating a demonstration organization is by directly sampling ψ
_pθ_ **_ψ0_** _x_ . However, this strategy introduces two key challenges: (1) A cycle in ψ might be
present, indicating that at least one demonstration example is selected multiple times; (2) ψ could ∼
include multiple separate sub-graphs, making it difficult to define the relative position between two( ∣ )
demonstration examples from two different sub-graphs. Taking a cue from treating diffusion models
as discriminative approaches [23], we start by randomly creating 200 potential solutions. Using
the diffusion model’s ability to provide conditional density estimates, we rate these 200 potential
solutions and select the one with the highest score to build the final demonstration organization. We
then reconstruct the sequence of demonstration examples from ψ, adding examples one by one into
the LLM context until we hit the length limit of the LLM.
[4https://platform.openai.com/docs/guides/embeddings](https://platform.openai.com/docs/guides/embeddings)
-----
**B.4** **Hyperparameters and Hardware Setup**
In the course of our experiment, we employ a 3-layer Anisotropic Graph Neural Network with a
hidden state dimensionality set to 256. We sweep the learning rate from 1e 4, 2e 4, 5e 4, 7e 4
and sweep batch size from 4, 8, 16, 32 . The processes of training and inference for the diffusion
models are conducted on a NVIDIA RTX 3090 GPU. [ − − − − ]
[ ]
**C** **Additional Examples**
We provide additional cases in Figure 13 to Figure 15. In Figure 13, our method proficiently identifies
viable subgoals, successfully guiding a clear path to the proof. This is accomplished by leveraging
pertinent demonstration examples which utilize operations of division and modulus. Akin to the
previous case, in Figure 14, our method accurately predicts potential subgoals. This is realized by
capitalizing on demonstration examples that employ the mathematical operations of squaring and
square root. Finally, in Figure 15, our method demonstrates its capacity to consistently discern viable
subgoals, thereby achieving a seamless completion of the proof structure.
-----
As a mathematician familiar with Isabelle, your task is to provide a formal proof in response to a given problem statement.
Your proof should be structured and clearly written, meeting the following criteria:
- It can be verified by Isabelle.
- Each step of the proof should be explained in detail using comments enclosed in "(*" and "*)".
- The explanation for each step should be clear and concise, avoiding any unnecessary or apologetic language.
Please ensure that your proof is well-organized and easy to follow, with each step building upon the previous one.
**Problem:**
Let 𝑛 be an integer and 𝑝 be a prime number. Show that 𝑝 divides 𝑛 if and only if 𝑛[!] is congruent to 0 modulo 𝑝.
**Proof:**
theorem numbertheory_prmdvsneqnsqmodpeq0:
fixes n :: int
and p :: nat
assumes "prime p"
shows "p dvd n \<longleftrightarrow> (n^2) mod p = 0"
_(* Step 1: Assume that 𝑝_ _divides 𝑛. Show that 𝑛[!]_ _is congruent to 0 modulo 𝑝._
_Step 2: Assume that 𝑛[!]_ _is congruent to 0 modulo 𝑝. Show that 𝑝_ _divides 𝑛. *)_
proof
_(* Step 1: Assume that 𝑝_ _divides 𝑛. Show that 𝑛[!]_ _is congruent to 0 modulo 𝑝.*)_
assume "p dvd n"
then show "(n^2) mod p = 0" sledgehammer
_(* Step 2: Assume that 𝑛[!]_ _is congruent to 0 modulo 𝑝. Show that 𝑝_ _divides 𝑛. *)_
next assume "(n^2) mod p = 0"
then show "p dvd n" sledgehammer
qed
**Problem:**
Prove that for any natural number 𝑛, 3 divides 𝑛["] + 2𝑛.
**Proof:**
theorem induction_divisibility_3divnto3m2n:
fixes n::nat
shows "3 dvd n^3 + 2 * n"
_(* Step 1: Prove the base case 𝑛= 0._
_Step 2: Assume the statement is true for some 𝑛_ _and prove it for 𝑛+ 1._
_Step 3: Expand (𝑛+ 1)["]+2(𝑛+ 1) and simplify._
_Step 4: Use the assumption that 3 divides 𝑛["]_ + 2𝑛 _to show that 3 divides 3𝑛[!]_ + 3𝑛+ 3.
_Step 5: Combine steps 3 and 4 to show that 3 divides (𝑛+ 1)["]+2(𝑛+ 1) ._
_Step 6: Conclude that the statement is true for all natural numbers 𝑛. *)_
proof (induct n)
_(* Step 1: Prove the base case 𝑛= 0.*)_
case 0
then show ?case sledgehammer
_(* Step 2: Assume the statement is true for some 𝑛_ _and prove it for 𝑛+ 1.*)_
next case (Suc n)
have ?case when "n=0"
using that sledgehammer
moreover have ?case when "n>0"
proof _(* Step 3: Expand (𝑛+ 1)["]+2(𝑛+ 1) and simplify.*)_
have "(n + 1)^3 + 2 * (n + 1) = (n^3+3*n^2+3*n+1) + (2*n+2)"
unfolding power3_eq_cube power2_eq_square sledgehammer
also have "... = n^3+2*n + 3*n^2+3*n+3" sledgehammer
finally have "(n+1)^3+2*(n+1) = n^3+2*n + 3*n^2+3*n+3".
_(* Step 4: Use the assumption that 3 divides 𝑛["]_ + 2𝑛 _to show that 3 divides 3𝑛[!]_ + 3𝑛+ 3.*)
moreover have "3 dvd n^3+2*n" using Suc(1) sledgehammer
moreover have "3 dvd 3*n^2+3*n+3" sledgehammer
_(* Step 5: Combine steps 3 and 4 to show that 3 divides (𝑛+ 1)["]+2(𝑛+ 1) .*)_
ultimately have "3 dvd (n+1)^3+2*(n+1)" sledgehammer
_(* Step 6: Conclude that the statement is true for all natural numbers 𝑛. *)_
then show ?thesis sledgehammer
qed
ultimately show ?case sledgehammer
qed
**Problem:**
Suppose 𝑛 is a natural number such that 𝑛 leaves a remainder of 3 when divided by 7. Show that (2𝑛+ 1) leaves a remainder of 0 when divided by 7.
**Proof:**
theorem mathd_numbertheory_370:
fixes n :: nat
assumes h0 : "n mod 7 = (3::nat)"
shows "(2*n+1) mod 7 = (0::nat)" **_Prompt_**
**_ChatGPT Output_**
_(* Step 1: Express 𝑛_ _as 7𝑘+ 3 for some 𝑘𝜖ℕ$._
_Step 2: Substitute the expression for 𝑛_ _into the expression for 2𝑛+ 1 and simplify._
_Step 3: Show that the resulting expression is divisible by 7. *)_
proof
_(* Step 1: Express 𝑛_ _as 7𝑘+ 3 for some 𝑘𝜖ℕ$.*)_
have "n = 7 * (n div 7) + (n mod 7)" using div_mult_mod_eq[of "n" "7"] sledgehammer
also have "... = 7 * (n div 7) + 3" using h0 sledgehammer
finally have c0: "n = 7 * (n div 7) + 3".
_(* Step 2: Substitute the expression for 𝑛_ _into the expression for 2𝑛+ 1 and simplify.*)_
have "2*n+1 = 2*(7*(n div 7) + 3) + 1" using c0 sledgehammer
also have "... = 14*(n div 7) + 7" sledgehammer
also have "... = 7*(2*(n div 7) + 1)" sledgehammer
finally have c1: "2*n+1 = 7*(2*(n div 7) + 1)".
_(* Step 3: Show that the resulting expression is divisible by 7. *)_
show "(2*n+1) mod 7 = 0" using c1 sledgehammer
qed
Figure 13: A case from miniF2F-valid. Our method identifies viable subgoals and leverages demonstration examples with division and modulus operations to complete the proof.
-----
Your proof should be structured and clearly written, meeting the following criteria:
- It can be verified by Isabelle.
- Each step of the proof should be explained in detail using comments enclosed in "(*" and "*)".
- The explanation for each step should be clear and concise, avoiding any unnecessary or apologetic language.
Please ensure that your proof is well-organized and easy to follow, with each step building upon the previous one.
**Problem:**
Suppose that 𝑥 is a positive real number. Show that 2 − 2 ≥2 −𝑥− "#! [.]
**Proof:**
theorem algebra_amgm_faxinrrp2msqrt2geq2mxm1div2x:
fixes x :: real
assumes "x > 0"
shows "2 - sqrt 2 \<ge> 2 - x - 1/ (2 * x)"
_(* Step 1: Show that_ 2 ≤𝑥+ "#! _[.]_
_Step 2: Rearrange the inequality to get 2𝑥["]_ + 1 −2𝑥2 ≥0.
_Step 3: Factor the expression to get ( 2𝑥−1)["]≥0._
_Step 4: Conclude that the inequality is true. *)_
proof
_(* Step 1: Show that_ 2 ≤𝑥+ "#! _[.*)]_
have c0: "2 * x > 0" using assms sledgehammer
have "sqrt 2 \<le> x + 1 / (2 * x)"
proof
define y where "y = sqrt 2"
have c1: "2 = y * y"
proof
have "2 = (sqrt 2) * (sqrt 2)" sledgehammer
then have "... = y * y" using \<open>y = sqrt 2\<close> sledgehammer
then show ?thesis sledgehammer
qed
have "(2 * x) * x + 1 - (2 * x) * (sqrt 2) = (y * y * x * x) + 1 - (2 * x) * y" using c1 y_def sledgehammer
also have "... = (y*x) * (y*x) - 2 * (y*x) + 1" sledgehammer
also have "... = (y*x - 1) * (y*x - 1)" sledgehammer
also have "... \<ge> 0" sledgehammer
ultimately have c2: "(2 * x) * x + 1 - (2 * x) * (sqrt 2) \<ge> 0" sledgehammer
have "(2*x) * (x + 1/(2*x) - sqrt 2) = (2 * x) * x + (2 * x) * (1/(2*x)) - (2*x) * sqrt 2" sledgehammer
also have "... = (2 * x) * x + 1 - (2*x) * sqrt 2" using c0 sledgehammer
_(* Step 2: Rearrange the inequality to get 2𝑥["]_ + 1 −2𝑥2 ≥0.*)
also have "... \<ge> 0" using c2 sledgehammer
_(* Step 3: Factor the expression to get ( 2𝑥−1)["]≥0.*)_
ultimately have "(2*x) * (x + 1/(2*x) - sqrt 2) \<ge> 0" sledgehammer
hence "x + 1/(2*x) - sqrt 2 \<ge> 0" using mult_pos_pos sledgehammer
_(* Step 4: Conclude that the inequality is true. *)_
then show ?thesis sledgehammer
qed
then show ?thesis sledgehammer
qed
**Problem:**
Suppose that 𝑥 and 𝑦 are two real numbers such that 𝑥+ 𝑦= 13 and 𝑥𝑦= 24. Find 𝑥["] + 𝑦["] and show that it is equal to 11.
**Proof:**
theorem mathd_algebra_510:
fixes x y :: real
assumes h0 : "x+y=13"
and h1 : "x*y=24"
shows "sqrt (x^2 + y^2) = 11"
_(* Step 1: Use the identity 𝑥["]_ + 𝑦["] = 𝑥+ 𝑦 ["] −2𝑥𝑦 _to find 𝑥["]_ + 𝑦["] _in terms of 𝑥_ _and 𝑦._
_Step 2: Substitute the given values of 𝑥_ _and 𝑦_ _to find 𝑥["]_ + 𝑦["] _._
_Step 3: Show that 𝑥["]_ + 𝑦["] = 121.
_Step 4: Take the square root of both sides to find_ 𝑥["] + 𝑦["].
_Step 5: Show that_ 𝑥["] + 𝑦["] = 11. *)
proof
_(* Step 1: Use the identity 𝑥["]_ + 𝑦["] = 𝑥+ 𝑦 ["] −2𝑥𝑦 _to find 𝑥["]_ + 𝑦["] _in terms of 𝑥_ _and 𝑦. *)_
have c0: "x^2 + y^2 = (x+y)^2 - 2*x*y" sledgehammer
_(* Step 2: Substitute the given values of 𝑥_ _and 𝑦_ _to find 𝑥["]_ + 𝑦["] _. *)_
also have "... = (13)^2 - 2*24" using h0 h1 sledgehammer
_(* Step 3: Show that 𝑥["]_ + 𝑦["] = 121. *)
also have "... = 121" sledgehammer
finally have c1: "x^2 + y^2 = 121" sledgehammer
_(* Step 4: Take the square root of both sides to find_ 𝑥["] + 𝑦["]. *)
then show ?thesis sledgehammer
_(* Step 5: Show that_ 𝑥["] + 𝑦["] = 11. *)
qed
**Problem:**
Suppose that 𝑥 and 𝑦 are two real numbers such that #$%" = 7 and 𝑥𝑦= 19. Find 𝑥["] + 𝑦["].
**Proof:**
theorem mathd_algebra_332:
fixes x y ::real
assumes "(x + y) / 2 = 7"
and "sqrt (x * y) = sqrt 19"
shows "x^2 + y^2 = 158" **_Prompt_**
**_ChatGPT Output_**
_(* Step 1: Use the identity (𝑥+ 𝑦)["]= 𝑥["]_ + 2𝑥𝑦+ 𝑦["] _to find 𝑥["]_ + 𝑦["] _in terms of 𝑥+ 𝑦_ _and 𝑥𝑦._
_Step 2: Substitute the given values of 𝑥+ 𝑦_ _and 𝑥𝑦_ _to find 𝑥["]_ + 𝑦["].
_Step 3: Show that 𝑥["]_ + 𝑦["] = 158. *)
proof
_(* Step 1: Use the identity (𝑥+ 𝑦)["]= 𝑥["]_ + 2𝑥𝑦+ 𝑦["] _to find 𝑥["]_ + 𝑦["] _in terms of 𝑥+ 𝑦_ _and 𝑥𝑦. *)_
have c0: "x^2 + y^2 = (x+y)^2 - 2*x*y" sledgehammer
_(* Step 2: Substitute the given values of 𝑥+ 𝑦_ _and 𝑥𝑦_ _to find 𝑥["]_ + 𝑦["]. *)
also have "... = (2*7)^2 - 2*(sqrt 19)^2" using assms sledgehammer
_(* Step 3: Show that 𝑥["]_ + 𝑦["] = 158. *)
also have "... = 158" sledgehammer
finally show "x^2 + y^2 = 158" sledgehammer
qed
Figure 14: A case from miniF2F-test. Our method predicts viable subgoals and utilizes demonstration
examples involving squaring and square root operations to finalize the proof.
-----
As a mathematician familiar with Isabelle, your task is to provide a formal proof in response to a given problem statement.
Your proof should be structured and clearly written, meeting the following criteria:
- It can be verified by Isabelle.
- Each step of the proof should be explained in detail using comments enclosed in "(*" and "*)".
- The explanation for each step should be clear and concise, avoiding any unnecessary or apologetic language.
Please ensure that your proof is well-organized and easy to follow, with each step building upon the previous one.
**Problem:**
Given a real number $n$ such that 𝑛≠3 and ("#$)("&') [= 2][, show that ][𝑛= 11][.]
**Proof:**
theorem mathd_algebra_181:
fixes n :: real
assumes h0 : "n \<noteq> 3" and h1 : "(n+5) / (n-3) = 2"
shows "n=11"
_(* Step 1: Use the given equation ("#$)("&')_ [= 2][ to obtain an equation involving ][𝑛][.]
_Step 2: Simplify the equation to obtain an expression for 𝑛._
_Step 3: Show that the expression for 𝑛_ _is equal to 11. *)_
proof
_(* Step 1: Use the given equation ("#$)("&')_ [= 2][ to obtain an equation involving ][𝑛][. *)]
have "n+5 = 2 * (n-3)" using h0 h1 sledgehammer
_(* Step 2: Simplify the equation to obtain an expression for 𝑛. *)_
thus ?thesis sledgehammer
_(* Step 3: Show that the expression for 𝑛_ _is equal to 11. *)_
qed
**Problem:**
Prove by induction that the sum of the first 𝑛 odd numbers is 𝑛[(].
**Proof:**
theorem induction_sum_odd:
fixes n :: nat
assumes "n > 0"
shows "(\<Sum> (k::nat) = 0..(n-1). 2 * k + 1) = n^2"
_(* Step 1: Base case: Show that the sum of the first odd number is 1._
_Step 2: Inductive step: Assume that the sum of the first 𝑛_ _odd numbers is 𝑛[(]. Show that the sum of the first 𝑛+ 1 odd numbers is (𝑛+ 1)[(]._
_Step 3: Combine the two steps to complete the proof. *)_
using assms
proof (induct n)
_(* Step 1: Base case: Show that the sum of the first odd number is 1. *)_
case (Suc n)
have ?case when "n=0" using that sledgehammer
_(* Step 2: Inductive step: Assume that the sum of the first 𝑛_ _odd numbers is 𝑛[(]. Show that the sum of the first 𝑛+ 1 odd numbers is (𝑛+ 1)[(]. *)_
moreover have ?case when "n>0"
proof
have "(\<Sum>k = 0..(n-1). 2 * k + 1) = n^2" using Suc(1) that sledgehammer
then have "(\<Sum>k = 0..(n-1). 2 * k + 1) + (2 * n + 1) = n^2 + 2 * n + 1" sledgehammer
then show ?thesis sledgehammer
qed
_(* Step 3: Combine the two steps to complete the proof. *)_
ultimately show ?case sledgehammer
qed simp
**Problem:**
For any natural number 𝑛 greater than or equal to 9, there exists a natural number 𝑥 such that 𝑥[(] is equal to the quotient of the difference between the
factorial of 𝑛+ 2 and the factorial of 𝑛+ 1 divided by the factorial of 𝑛.
**Proof:**
theorem amc12b_2020_p6:
fixes n :: nat
assumes h0: "9 \<le> n"
shows "\<exists>x::nat. (real_of_nat x)^2 = (fact (n + 2)
- fact (n + 1)) / fact n"
**_Prompt_**
**_ChatGPT Output_**
_(* Step 1: Use the definition of factorial to simplify the expression._
_Step 2: Rearrange the expression to obtain a perfect square._
_Step 3: Show that the perfect square is a natural number. *)_
proof
_(* Step 1: Use the definition of factorial to simplify the expression. *)_
have "(fact (n + 2) - fact (n + 1)) / fact n = ((n+2)*(n+1)*fact n - (n+1)*fact n) / fact n" using fact_reduce[of "n+1"] sledgehammer
_(* Step 2: Rearrange the expression to obtain a perfect square. *)_
also have "... = (n+1) * (n+2) - (n+1)" sledgehammer
also have "... = (n+1)^2" sledgehammer
_(* Step 3: Show that the perfect square is a natural number. *)_
finally show ?thesis using assms sledgehammer
qed
Figure 15: A case from miniF2F-test. Our method recognizes viable subgoals and successfully
finishes the proof.
-----
| [
"Xueliang, Zhao",
"Wenda, Li",
"Lingpeng, Kong"
] | 2023-05-25T00:00:00 | ICML 2024 | true | 1 | 0 | [
"Isabelle"
] | http://arxiv.org/abs/2305.16366 | https://arxiv.org/abs/2305.16366 | https://www.semanticscholar.org/paper/58652a527799c3ee1f0c0097dba9057e397c2800 |
SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning | While Large Language Models (LLMs) have demonstrated remarkable reasoning ability, providing a structured, explainable proof to ensure explainability, i.e. structured reasoning, still remains challenging. Among two directions of structured reasoning, we specifically focus on backward chaining, where the query is recursively decomposed to subgoals by applying inference rules. We point out that current popular backward chaining implementations (Least-to-most prompting and LAMBADA) fail to implement the necessary features of backward chaining, such as arbitrary-depth recursion and binding propagation. To this end, we propose a novel backward chaining framework, SymBa (Symbolic Backward Chaining). In SymBA, a symbolic solver controls the whole proof process, and an LLM searches for the relevant natural language premises and translates them into a symbolic form for the solver. By this LLM-solver integration, while producing a completely structured proof that is symbolically verified, SymBa achieves significant improvement in performance, proof accuracy, and efficiency in diverse structured reasoning benchmarks compared to baselines. | This work proposes a novel backward chaining framework, SymBa (Symbolic Backward Chaining), in which a symbolic solver controls the whole proof process, and an LLM searches for the relevant natural language premises and translates them into a symbolic form for the solver. | ## SymBa: Symbolic Backward Chaining for Structured Natural Language Reasoning
**Jinu Lee[1][,][2]** and Wonseok Hwang[1][,][3]
1 2 3
LBox University of Illinois Urbana-Champaign University of Seoul
{jinulee.v, wonseok.hwang}@lbox.kr
**Abstract**
While Large Language Models (LLMs) have
demonstrated remarkable reasoning ability, providing a structured, explainable proof to ensure
explainability, i.e. structured reasoning, still
remains challenging. Among two directions of
structured reasoning, we specifically focus on
backward chaining, where the query is recursively decomposed to subgoals by applying inference rules. We point out that current popular
backward chaining implementations (Least-tomost prompting and LAMBADA) fail to implement the necessary features of backward
chaining, such as arbitrary-depth recursion and
binding propagation. To this end, we propose a novel backward chaining framework,
SymBa (Symbolic Backward Chaining). In
SymBa, a symbolic solver controls the whole
proof process, and an LLM searches for the
relevant natural language premises and translates them into a symbolic form for the solver.
By this LLM-solver integration, while producing a completely structured proof that is symbolically verified, SymBa achieves significant
improvement in performance, proof accuracy,
and efficiency in diverse structured reasoning
benchmarks compared to baselines.
In general, strategies for reasoning can be typically divided into two categories, forward chain_ing and backward chaining (Poole and Mackworth,_
2010). Forward chaining reasoners first collect the
base facts and repeatedly derive a new fact using
logical rules until it finally proves the user’s query.
In contrast, backward chaining reasoners start from
the query and apply rules that decompose the query
into a set of subgoals. These subgoals are recursively decomposed until they can be directly proved
or refuted using the base facts.
In terms of structured reasoning, forward chaining methods require a tailored planner module
that selects the most likely next reasoning step
to prevent proof divergence (Sprague et al., 2023;
Creswell et al., 2023; Yang et al., 2022). Consequently, these approaches suffer from severe performance drop at longer reasoning paths due to
planning failure (Kazemi et al., 2023). In contrast,
backward chaining methods are guaranteed to terminate, which removes the necessity for a planner.
However, we claim that current LLM-based
backward chaining implementations do not fully
implement the backward chaining algorithm, by
omitting features like arbitrary-depth recursion
and binding propagation (Section 3.1). These features, necessary for performing sound and accurate
backward chaining in diverse settings, are welldefined and can be effectively handled with symbolic solvers.
To this end, we propose a novel framework,
**SymBa (Symbolic Backward Chaining), a mod-**
ular backward chaining approach that integrates
a symbolic solver with an LLM. In SymBa, the
solver controls the entire reasoning process, and
the LLM is instructed to generate a single reasoning
step only when the solver fails to prove a subgoal.
By interleaving the natural language sentences and
corresponding symbolic representations, SymBa
can leverage the natural language reasoning abilities of LLMs and the logical soundness provided
**1** **Introduction**
Recently, large language models (LLMs) trained
with massive amounts of natural language text have
shown remarkable reasoning ability (Wei et al.,
2022; Kojima et al., 2022, inter alia.). However,
LLMs might generate inaccurate and ungrounded
reasoning paths as the number of reasoning steps increases (Saparov and He, 2023). To simultaneously
enhance the accuracy and explainability of generated proofs against complex problems, structured
_reasoning, where the model provides an explicit,_
well-structured reasoning path instead of rationales
in free-form text, has been frequently explored as
a solution (Creswell et al., 2023; Kazemi et al.,
2023).
-----
Figure 1: Brief comparison between natural language-based structured backward chaining methods and SymBa.
by the symbolic solver.
We directly compare the proposed method with
LLM-based backward chaining baselines, Leastto-most prompting (Zhou et al., 2023) and LAMBADA (Kazemi et al., 2023), in seven diverse
benchmarks that span over deductive, relational,
and arithmetic reasoning. SymBa outperforms previous methods in terms of task performance, proof
accuracy, and efficiency, while being able to provide a strictly structured proof in both symbolic
and natural language forms[1].
**2** **Background**
**2.1** **Logic programming**
Logic programming is a programming paradigm
based on formal logic. Generally, each statement
of a logic program is expressed as a rule, which
describes an implication relation between terms
that have boolean truth values.
_h :- p1, ..., pn, not q1, ..., not qm._ (1)
This rule denotes that when every subgoal terms pi
and not qj are true, the head term h is also proven
true. A rule with an empty body, a fact, expresses
that the head term h is unconditionally true.
For instance, consider the logic program
in Equation 2. The terms dad(alan, carl)
and dad(carl, bill) are true by the corresponding facts. When we substitute variables of Rule1 (i.e. _bind) using the binding_
{A/alan, B/bill, C/carl}, all subgoals become
identical to already proved terms, so the respective
bound head granddad(alan, bill) is also deduced
1We publicly disclose our implementation of baselines
and SymBa, test data, prompts, and anything necessary to
[reproduce this study in the following repository.](https://osf.io/g9h42/?view_only=74ab8cc288404502bd2d820820ad9426)
as true.
Rule1. granddad(A, B) :- dad(A, C), dad(C, B).
Fact1. dad(alan, carl) :-. (2)
Fact2. dad(carl, bill) :-.
**2.2** **Backward chaining solver**
Backward chaining solvers (top-down solvers) are
logic program interpreters that start from the query
term and recursively apply rules until the proof is
complete. When a user provides a query term, the
solver searches through the database for symbolic
rules and facts that might prove the query. A rule
or a fact can prove the query only if there exists a
binding that can make the query and the head identical, i.e. the query and the head unify. If a rule that
unifies with the query is found, the solver recursively proves each subgoal. When all subgoals are
successfully proven true, the query is also proved.
Consider the logic program in Equation 2. If the
query is given as granddad(alan, bill), the only
statement that has a unifying head is Rule1. To
make the rule head and query identical, we apply
the binding {A/alan, B/bill} to Rule1, obtaining two subgoals dad(alan, C) and dad(C, bill).
The first subgoal can be proved by binding C/carl.
Subsequently, the binding is dynamically propagated to the following subgoals (i.e. binding prop_agation), in this case updating the second subgoal_
to dad(carl, bill). As this is also true, it can be
concluded that the original query is proven.
**3** **Methods**
**3.1** **Baselines**
We select two popular natural language-based backward chaining methods as our baseline, namely
**Least-to-most prompting (Zhou et al., 2023) and**
**LAMBADA (Kazemi et al., 2023).**
-----
**Least-to-most prompting is a two-stage task**
decomposition method. In the initial Decompose
stage, the LLM is instructed to decompose the
given question into sub-questions and order them
from least complicated to most. The questions are
passed to the Solution stage, where each question
is answered in an incremental order. This process
can be seen as explicitly planning the proof’s structure first and executing the plan during the actual
reasoning later.
While having more structure in its proof compared to Chain-of-thought reasoning, as Least-tomost prompting performs decomposition only once,
it is required to predict the total ordering of subquestions in a single run, which is challenging especially when there exist multiple potential reasoning
paths (Patel et al., 2022). We further examine the
proof accuracy problem of Least-to-most prompting in Section 5.2.
**LAMBADA implements a modular backward**
chaining approach that operates on pure natural
language. When given a query, it tests all facts
and rules against the query to find out which might
apply[2]. If a matching fact is retrieved, it stops
recursion. If any rules are retrieved, they are then
bound and decomposed into subgoals. Finally, it is
ensured that the rule and the query have the same
negation status.
While LAMBADA overcomes the limitation of
Least-to-most prompting by allowing an arbitrary
decomposition depth, LAMBADA’s capability is
severely limited due to the lack of binding propagation. As binding propagation is necessary for
operations like coreferencing between subgoals (illustrated in Equation 2) or returning a value, LAMBADA is inherently incapable of various types of
reasoning including relational reasoning with bridging entities (Sinha et al., 2019; Yang et al., 2018)
and arithmetic reasoning (Cobbe et al., 2021). Besides the binding propagation problem, we find
LAMBADA to be highly inefficient compared to
other methods (Section 5.3).
**3.2** **Proposed method**
**3.2.1** **Symbolic Backward Chaining**
To overcome the limitations of previously proposed
methods, we propose SymBa (Symbolic Backward
Chaining), which integrates a symbolic backward
2While the original paper requires classification of each
sentence as either fact or rule before the actual reasoning, we
do not follow their implementation to ensure a fair comparison.
chaining solver and an LLM for natural language
reasoning.
The workflow of SymBa is briefly illustrated in
Figure 2. A symbolic solver is capable of deducing
a query if the solver’s database includes every necessary statement. However, when the relevant context is only given in natural language, the database
is initially empty, automatically failing to prove the
query. To make progress, the solver calls the LLM
to check if the failed query can be entailed from the
natural language context. The LLM then generates
a statement that unifies with the subgoal, and the
solver retries proving the failed subgoal with the
updated database. The process is continued until
the original query is proved, or every possible reasoning path fails. Appendix A includes a formal,
detailed description of SymBa’s mechanism.
Delegating proof control to a symbolic solver
has numerous benefits. Most importantly, symbolic solvers algorithmically produce sound and
formally verified proofs. We compare the proof
accuracy to baselines in Section 5.2. Furthermore,
SymBa can handle tasks like relational reasoning
and mathematical reasoning that LAMBADA fails
to address by leveraging the solver’s in-built binding propagation. Finally, solver operations are computationally efficient compared to neural network
inferences. By performing operations like goal
decomposition and binding propagation with symbols, SymBa is significantly efficient compared to
natural language-based backward chaining methods (Section 5.3).
**3.2.2** **Single-step statement generation**
In SymBa, the LLM is instructed to generate a logic
program statement from the context that might
prove the current subgoal. Similar to previous
works on structured reasoning that adopt modular strategy (Creswell et al., 2023; Kazemi et al.,
2023), we divide the single-step statement generation process into five modules: Fact/Rule Search,
Fact/Rule Translation, and Symbolic Validation
(Figure 3).
**Fact/Rule Search In the first stage, the LLM is**
prompted with the symbolic query and the context,
and is instructed to generate a description of a reasoning step that might prove the query in natural
language.
**Fact/Rule Translation Subsequently, the LLM**
is given the query and the description of the backward chaining step (obtained from the Search module) and generates a symbolic statement. Complet
-----
Figure 2: Overview of SymBa. The proof process is mainly controlled by a symbolic backward chaining solver
(gray). When a goal is not provable by the solver alone, an LLM (navy) is called and generates a single reasoning
step which is added to the symbolic solver’s database.
ing both the Search and the Translation step yields
the symbolic representation of the logical rule/fact
that proves the given query term.
**Symbolic validation We verify the generated**
logic program statement by checking if the statement is syntactically correct, and if the head of the
statement unifies to the given query. Note that this
step is purely symbolic and does not require any
LLM inference.
**4** **Experimental settings**
**4.1** **Benchmarks**
**Deductive reasoning We make use of four rep-**
resentative benchmarks for deductive reasoning,
namely the ProofWriter family (ProofWriter, BirdsElectricity, ParaRules) (Tafjord et al., 2021; Clark
et al., 2020) and PrOntoQA (Saparov and He,
2023). Each instance is formulated as a binary classification task, deciding whether the given query
can be proved according to the given rules and facts.
For ProofWriter, we leverage the most challenging
subset that contains problems with reasoning depth
up to 5. For PrOntoQA, we sample examples with
fictional entities (hardest) and reasoning depth 4.
**Relational reasoning CLUTRR (Sinha et al.,**
2019) is a relational reasoning benchmark based
on human-written stories about family relations.
For our experiments, we reformulate the task into
true-or-false form, where two entities and a relation
are presented and one should predict if the given
relation is true or false. We sample from the hardest
subset where there are up to 9 bridging entities.
**Arithmetic reasoning To evaluate arithmetic**
reasoning performance, we leverage two benchmarks, namely MAWPS (Koncel-Kedziorski et al.,
2016) and GSM8k (Cobbe et al., 2021). The goal
of these two tasks is to predict the numeric answer
to a given question. MAWPS includes synthetic
arithmetic problems that can be solved within 1-3
elementary operations. In contrast, GSM8k contains human-written questions with diverse vocabulary and complex solutions.
More information regarding data statistics, fewshot example construction, logic program representation, and evaluation of each benchmark can be
found in Appendix B.
**4.2** **LLM and Few-shot examples**
To reproduce baselines and implement SymBa, we
use three open- and closed-sourced state-of-the-art
LLMs: GPT-4 Turbo, Claude 3 Sonnet, and LLaMa
3 70B Instruct. A brief comparison of these models
is shown in Table 1.
Model Provider Open? Release date
GPT-4 Turbo OpenAI N 11/04/2023
Claude-3 Sonnet Anthropic N 02/29/2024
LLaMa 3 70B Meta Y 04/18/2024
Table 1: Brief information of LLMs applied in this study.
_Release date column refers to the version of the specific_
checkpoints or API endpoints used for the experiments.
We sample few-shot demonstrations from each
training split and manually reformat them as defined by each baseline (Appendix B). For SymBa,
-----
Figure 4: Examples of Positive/Negative demonstrations included in the prompts for the Search/Translation
module of SymBa.
As the benchmarks incorporate multiple plausible reasoning paths with significant depth, the limited planning ability of Least-to-most prompting
hinders performance in large-depth benchmarks,
such as ProofWriter, ParaRules, CLUTRR, and
GSM8k. While it achieves task performance comparable to SymBa in some settings, we further show
that the proof might not be accurate and faithful
due to the propagation of Decomposition errors
(Section 6.1).
The accuracy LAMBADA achieves in deductive
reasoning is also lower than SymBa. As LAMBADA implements a fully recursive proof generation process, the task performance is less affected by the accuracy of the speculative planning. However, the large performance gap in
ParaRules, where the model must extract the underlying reasoning statement despite the syntactic distortion, demonstrates the effectiveness of intermediate symbolic representations that capture
the intended logical meaning. Furthermore, as
previously mentioned, LAMBADA cannot reason
through relational and arithmetic reasoning benchmarks (CLUTRR, MAWPS, and GSM8k) due to
the missing backward propagation.
We present complete results including standard
deviations in Appendix C.
**5.2** **Proof accuracy**
One of the key benefits of structured reasoning is
that it generates more inspectable outputs (Ribeiro
et al., 2023). In this section, we analyze the proof
accuracy of three backward chaining methods and
Chain-of-Thought prompting in four benchmarks.
Following Kazemi et al. (2023), the first 30 correct proofs for positive (non-negated) queries are
sampled and examined if they include any false
intermediate statements or exclude necessary reasoning steps.
Figure 3: Brief illustration of the modules in SymBa’s
single statement generation procedure. When the solver
fails to prove a term (as illustrated in Figure 2), the
single-step statement generation procedure is initiated.
Search modules retrieve plausible reasoning steps from
the context, which is translated to symbolic form by
Translation modules. Statements that passed Symbolic
Validation module are added to the solver’s database.
we combine the Positive and Negative examples to
reduce hallucination in the Search/Translation modules (Figure 4); effects of these Negative examples
are presented in Section 6.2.
**4.3** **Solver**
To implement the algorithm described in Section
2.2, we develop a custom backward chaining solver
in Python that is able to process logic programs
with arithmetic operations. We formally define the
solver’s algorithm in Appendix A.
**5** **Results**
**5.1** **Task performance**
The main results are presented in Table 2.
Among the three backward chaining methods compared (Least-to-most prompting, LAMBADA, and
SymBa), SymBa demonstrates strong performance
robust to the type of reasoning (deductive, relational, and arithmetic) and the base language
model.
-----
|Model|Method|Deductive|Col4|Col5|Col6|Relational|Arithmetic|Col9|
|---|---|---|---|---|---|---|---|---|
|||ProofWriter|BirdsElec|ParaRules|PrOntoQA|CLUTRR|MAWPS|GSM8k|
|GPT-4|Least-to-most LAMBADA SymBa|71.5 69.7 79.8|88.2 83.4 94.4|71.8 59.7 79.2|87.5 96.0 96.3|81.5 X 84.3|84.3 X 86.7|60.6 X 63.8|
|Claude-3|Least-to-most LAMBADA SymBa|60.3 69.3 77.6|75.7 62.7 77.3|54.0 57.7 69.0|86.0 67.0 91.0|77.0 X 85.0|94.2 X 94.1|59.3 X 67.4|
|LLaMa-3|Least-to-most LAMBADA SymBa|61.4 64.0 70.4|71.0 82.3 92.9|66.7 62.1 71.7|95.0 90.8 93.3|72.0 X 90.5|89.0 X 87.9|61.5 X 67.0|
Table 2: Average accuracy (%) on four runs per each benchmark, LLM model, and reasoning method. Boldface
indicates that the score is significantly higher than others (confidence 95%). LAMBADA is incapable of handling
relational and arithmetic benchmarks.
Figure 5: Proof accuracy on four reasoning benchmarks.
In the first 30 examples that each method got correct,
SymBa and LAMBADA achieved the highest proof
accuracy, while Least-to-most achieved the lowest.
Results are presented in Figure 5. It is shown that
two modular methods (LAMBADA and SymBa)
generate the most accurate proofs, where Leastto-most prompting demonstrates significantly degraded proof accuracy. Such behavior can be attributed to shortcuts, where it has failed to predict
the decomposition order but reached the correct
conclusion. Figure 6 illustrates the case where
Least-to-most produces incorrect reasoning paths.
In summary, we show that the modular approach
can significantly contribute to the proof accuracy
as previously claimed in Creswell et al. (2023) and
Kazemi et al. (2023).
**5.3** **Efficiency**
To compare the efficiency, we report the token usage, API cost, and execution time for completing
300 examples in ProofWriter following Kazemi
et al. (2023).
The results are presented in Table 3. SymBa
achieves 9x token/cost efficiency and 22x speed
compared to LAMBADA. While LAMBADA uses
Figure 6: Example of shortcuts by Least-to-most
prompting, sampled from CLUTRR. Even though the
proof planning is completely inaccurate.
|202,420|8.02|
|---|---|
Tokens Cost($) Time(h)
CoT 202,420 8.02 0.62
Least-to-most 1,485,989 47.14 1.18
LAMBADA 6,625,623 221.72 23.96
SymBa **880,106** **27.22** **1.15**
Table 3: Token/cost/time consumption (lower the better)
for 300 examples in ProofWriter benchmark in GPT-4
Turbo. Regarding the cost, the OpenAI API used in this
study charges $0.03 per 1,000 input tokens and $0.05
per 1,000 output tokens.
an LLM to perform unification checks and subgoal
decomposition, these processes are delegated to
the symbolic solver in SymBa, which results in
significantly reduced LLM inference costs.
Despite that SymBa requires multiple LLM inferences per each reasoning step, SymBa is even
more efficient than Least-to-most prompting, a nonmodular approach. While Least-to-most prompting
can be optimized by dynamically appending the
questions to intermediate sequences during the inference, currently available commercial LLM APIs
-----
do not support such functionality.
**6** **Analysis**
**6.1** **Error analysis**
We manually classify the errors observed from
SymBa into three categories: Search-Hallucination,
Search-Miss, and Translation. Definitions of the
error types are shown in Table 4.
Error Type Definition
Search-Hallucination The generated description is not
in the context, or unrelated to the
query.
Search-Miss A relevant description stated in
the context was not retrieved.
Translation Symbolic statement is unfaithfully translated from the description (i.e. syntax error, misleading
symbol names).
Table 4: Description of three error classes observed
from SymBa. If multiple errors occur simultaneously in
one example, we select the error that appears first.
Figure 7: Error analysis results for SymBa. We sampled
30 proofs that resulted in wrong answers and manually
classified them according to Table 4.
As presented in Figure 7, the distribution of errors highly varies along the datasets. It implies that
each benchmark poses unique challenges depending on numerous factors, such as reasoning type
and lexical diversity.
Among the benchmarks, we focus on
ProofWriter and Birds-Electricity, which are
both deductive reasoning benchmarks yet display
completely different error distributions. While
rules in ProofWriter often contain variables (e.g.
’If someone is red then they are round’), 99.6%
of the rules from Birds-Electricity are bound (e.g.
’If wire is metal then wire conducts electricity’).
From this observation, we hypothesize that the
higher ratio of unbound rules leads to elevated
Search-miss errors.
Figure 8: Recall of the Rule Search module in bound
and unbound ProofWriter rules.
We compare the recall of the Rule Search module in isolation, based on whether the target rule is
bound or not (Figure 8). Rule Search achieves a
recall of approximately 51% when the target rule is
not bound, which is significantly lower than that of
bound rules (∼92%). It proves that the boundness
of the provided rules seriously affects Search-Miss
errors, possibly due to the low lexical overlap of
unbound rules compared to bound rules (Shinoda
et al., 2021; Liu et al., 2020).
**6.2** **Ablation study**
As an ablation study, we selectively manipulate the
modules or in-context demonstrations and examine
the performance of four tasks.
**Modules To analyze the contribution of each**
module, we selectively remove some and compare
the performance. In the -Search setting, we remove Fact/Rule Search by merging it to Fact/Rule
Translation, so that the symbolic statement is directly generated from the context and the query
without intermediate textual representations. In the
-Unify setting, we disable the Symbolic Validation
module by not checking if the generated statement
unifies to the query.
**Negative in-context examples We also test the**
effects of the Negative in-context examples illustrated in Figure 4. In the -SearchNeg setting, we
remove Negative examples from the Search module, while in -TransNeg we remove Negative examples from the Translation module.
|PW|BE|CLUTRR|
|---|---|---|
|79.8|94.4|84.3|
|-22.7 -6.9|-5.2 -1.6|+2.4 -8.7|
PW BE CLUTRR GSM8k
SymBa 79.8 94.4 84.3 63.8
-Search -22.7 -5.2 +2.4 +3.0
-Unify -6.9 -1.6 -8.7 -0.1
-SearchNeg -8.8 -29.8 +2.7 +4.1
-TransNeg -2.4 -12.0 -13.8 +1.5
Table 5: Ablation results on four benchmarks using
GPT-4 Turbo. All ablation results are 4-run.
As presented in Table 5, the effects of each setting highly vary along the datasets. In ProofWriter
variants, the performance significantly drops for all
-----
settings. It is notable that in CLUTRR and GSM8k,
some ablation settings achieve similar or even better performance compared to the original setting.
However, we observe common issues related to
the proof accuracy in these settings. In GSM8k,
the model often directly outputs the answer instead of providing structured explanations, while
in CLUTRR the model makes extreme SearchHallucination and Translation errors (Figure 9). To
summarize, the modular approach and negative incontext examples are both necessary for SymBa’s
robustness and accuracy in multi-step reasoning.
Figure 9: Examples of erroneous logic program statements, sampled from -SearchNeg in GSM8k and
-Search in CLUTRR. Ablated versions often fail to
produce a faithful reasoning path where SymBa generates a correct proof (denoted as Gold).
**7** **Related works**
**7.1** **Backward chaining**
Backward chaining has not much been explored in
the era of LLM and in-context learning compared
to forward chaining. At the time of writing, the
only work that explicitly claims to be an LLMbased backward chaining method is LAMBADA.
Alternatively, some backward chaining methods
use relatively small models directly fine-tuned with
in-domain data (Tafjord et al., 2022; Bostrom et al.,
2022). These methods train individual modules for
rule generation and verification, achieving strong
results but on behalf of the costly construction of
in-domain data for training.
Furthermore, as previously described in Section
3.1, approaches based on task decomposition (Zhou
et al., 2023; Khot et al., 2023; Radhakrishnan et al.,
2023) can be viewed as a type of backward chaining (Huang and Chang, 2023). Nonetheless, these
methods tend to demonstrate relatively low proof
accuracy due to planning failure (Radhakrishnan
et al., 2023, Section 5.2 of this work), while SymBa
is capable of providing a fully structured proof with
high accuracy.
**7.2** **LLM and Logic programming**
Integrating logic programming and LLMs for multistep reasoning is a recently emerging topic (Pan
et al., 2023; Yang et al., 2023; Olausson et al., 2023,
_inter alia.), triggered by the improvement in rea-_
soning and code generation ability of LLMs. The
majority of these works implement a similar twostage approach: (1) convert the problem formulated
in natural language into a logic program, and (2)
run an external solver to prove the query.
SymBa differs from these methods as the solver
is integrated into the loop instead of operating in
separate stages. It is reported that these methods
often choose incompatible representations for the
same concept or fail to discover information that
does not surface in the premises (Olausson et al.,
2023), as they generate the code without any hierarchical cues about how statements are structured.
These issues can be potentially mitigated by the
backward chaining of SymBa, as it ensures that all
subgoals are addressed at least once and that the
generated statement unifies with the query.
**8** **Conclusion**
We introduce SymBa, a novel backward chaining
method for diverse structured reasoning. While
current backward chaining implementations based
on LLMs either overly limit the recursion depth or
cannot perform relational and arithmetic reasoning,
our method integrates a symbolic solver with LLM
that removes both limitations.
By the solver-LLM integration, we achieve high
performance in various tasks compared to backward chaining baselines. Furthermore, SymBa provides a structured proof in both symbols and natural
language with high accuracy and efficiency.
From both theoretical and empirical perspectives,
we believe that SymBa significantly extends the
horizon of LLM-based backward chaining.
-----
**9** **Limitations**
While SymBa significantly improves the performance and efficiency of LLM-based backward
chaining, it still holds limitations inherited from
LLMs, backward chaining, and symbolic reasoning.
To begin with, LLMs often produce counterfactual and inconsistent information, and can potentially cause risk when used in domains where high
precision and factuality are required. While SymBa
reduces errors by leveraging the symbolic solver
and applying a modular approach, the single-step
statement generation based on LLM is still subjective to producing false reasoning steps that might
lead to the wrong conclusion.
Furthermore, even though backward chaining is
inherently free from infinite recursion, a naively
implemented backward chaining system might still
require substantial computation in fact-intensive
tasks such as knowledge base question answering (KBQA) (Yih et al., 2016; Gu et al., 2021).
This might be mitigated by hybrid forward and
backward chaining (Hong et al., 2022) or by using sophisticated planning algorithms for symbolic
solvers (Lu et al., 2012; Yang et al., 2023). We
leave this direction as future work.
Lastly, some reasoning problems may not be able
to be formulated in logic programming notations
as in this study. Most notably, solving high-order
logic problems generally requires meta-predicates
that reason over the database, such as call/N in
Prolog (Chen et al., 1993), which cannot be handled using the current algorithm of SymBa. Besides high-order logic, some reasoning tasks (e.g.
Dalvi et al., 2021; Zellers et al., 2019) require reasoning with complex linguistic expressions and
highly pragmatic assumptions, which might not be
effectively expressed using logic programming.
**References**
[K.R. Apt and K. Doets. 1992. A New Definition of](https://books.google.co.kr/books?id=FSs6vwEACAAJ)
_[SLDNF-resolution. Amsterdam ILLC CT. Institute](https://books.google.co.kr/books?id=FSs6vwEACAAJ)_
for Logic, Language and Computation.
Kaj Bostrom, Zayne Sprague, Swarat Chaudhuri, and
Greg Durrett. 2022. [Natural language deduction](https://doi.org/10.18653/v1/2022.findings-emnlp.358)
[through search over statement compositions. In Find-](https://doi.org/10.18653/v1/2022.findings-emnlp.358)
_ings of the Association for Computational Linguistics:_
_EMNLP 2022, pages 4871–4883, Abu Dhabi, United_
Arab Emirates. Association for Computational Linguistics.
Weidong Chen, Michael Kifer, and David Scott Warren.
[1993. HILOG: A foundation for higher-order logic](https://doi.org/10.1016/0743-1066(93)90039-J)
[programming. J. Log. Program., 15(3):187–230.](https://doi.org/10.1016/0743-1066(93)90039-J)
Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020.
[Transformers as soft reasoners over language. In Pro-](https://doi.org/10.24963/ijcai.2020/537)
_ceedings of the Twenty-Ninth International Joint Con-_
_ference on Artificial Intelligence, IJCAI-20, pages_
3882–3890. International Joint Conferences on Artificial Intelligence Organization. Main track.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
Antonia Creswell, Murray Shanahan, and Irina Higgins.
[2023. Selection-inference: Exploiting large language](https://openreview.net/pdf?id=3Pf3Wg6o-A4)
[models for interpretable logical reasoning. In The](https://openreview.net/pdf?id=3Pf3Wg6o-A4)
_Eleventh International Conference on Learning Rep-_
_resentations, ICLR 2023, Kigali, Rwanda, May 1-5,_
_2023._
Bhavana Dalvi, Peter Jansen, Oyvind Tafjord, Zhengnan
Xie, Hannah Smith, Leighanna Pipatanangkura, and
[Peter Clark. 2021. Explaining answers with entail-](https://doi.org/10.18653/v1/2021.emnlp-main.585)
[ment trees. In Proceedings of the 2021 Conference](https://doi.org/10.18653/v1/2021.emnlp-main.585)
_on Empirical Methods in Natural Language Process-_
_ing, pages 7358–7370, Online and Punta Cana, Do-_
minican Republic. Association for Computational
Linguistics.
Yu Gu, Sue Kase, Michelle Vanni, Brian Sadler, Percy
Liang, Xifeng Yan, and Yu Su. 2021. Beyond iid:
three levels of generalization for question answering
on knowledge bases. In Proceedings of the Web
_Conference 2021, pages 3477–3488. ACM._
Ruixin Hong, Hongming Zhang, Xintong Yu, and
Changshui Zhang. 2022. [METGEN: A module-](https://doi.org/10.18653/v1/2022.findings-naacl.145)
[based entailment tree generation framework for an-](https://doi.org/10.18653/v1/2022.findings-naacl.145)
[swer explanation. In Findings of the Association](https://doi.org/10.18653/v1/2022.findings-naacl.145)
_for Computational Linguistics: NAACL 2022, pages_
1887–1905, Seattle, United States. Association for
Computational Linguistics.
[Jie Huang and Kevin Chen-Chuan Chang. 2023. To-](https://doi.org/10.18653/v1/2023.findings-acl.67)
[wards reasoning in large language models: A survey.](https://doi.org/10.18653/v1/2023.findings-acl.67)
In Findings of the Association for Computational
_Linguistics: ACL 2023, pages 1049–1065, Toronto,_
Canada. Association for Computational Linguistics.
Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin
[Xu, and Deepak Ramachandran. 2023. LAMBADA:](https://doi.org/10.18653/v1/2023.acl-long.361)
[Backward chaining for automated reasoning in nat-](https://doi.org/10.18653/v1/2023.acl-long.361)
[ural language. In Proceedings of the 61st Annual](https://doi.org/10.18653/v1/2023.acl-long.361)
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pages 6547–6568,_
Toronto, Canada. Association for Computational Linguistics.
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao
Fu, Kyle Richardson, Peter Clark, and Ashish Sab[harwal. 2023. Decomposed prompting: A modular](https://openreview.net/forum?id=_nGgzQjzaRy)
-----
[approach for solving complex tasks. In The Eleventh](https://openreview.net/forum?id=_nGgzQjzaRy)
_International Conference on Learning Representa-_
_tions._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
[guage models are zero-shot reasoners. In Advances](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
_in Neural Information Processing Systems 35: An-_
_nual Conference on Neural Information Processing_
_Systems 2022, NeurIPS 2022, New Orleans, LA, USA,_
_November 28 - December 9, 2022._
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang,
[Yile Wang, and Yue Zhang. 2020. Logiqa: A chal-](https://doi.org/10.24963/ijcai.2020/501)
[lenge dataset for machine reading comprehension](https://doi.org/10.24963/ijcai.2020/501)
[with logical reasoning. In Proceedings of the Twenty-](https://doi.org/10.24963/ijcai.2020/501)
_Ninth International Joint Conference on Artificial_
_Intelligence, IJCAI-20, pages 3622–3628. Interna-_
tional Joint Conferences on Artificial Intelligence
Organization. Main track.
[Benjie Lu, Zhiqing Liu, and Hui Gao. 2012. An adap-](https://doi.org/10.1109/CCIS.2012.6664359)
[tive prolog programming language with machine](https://doi.org/10.1109/CCIS.2012.6664359)
[learning. In 2nd IEEE International Conference on](https://doi.org/10.1109/CCIS.2012.6664359)
_Cloud Computing and Intelligence Systems, CCIS_
_2012, Hangzhou, China, October 30 - November 1,_
_2012, pages 21–24. IEEE._
Kyle Marple, Elmer Salazar, and Gopal Gupta. 2017.
[Computing stable models of normal logic programs](http://arxiv.org/abs/1709.00501)
[without grounding. CoRR, abs/1709.00501.](http://arxiv.org/abs/1709.00501)
Theo Olausson, Alex Gu, Ben Lipkin, Cedegao Zhang,
Armando Solar-Lezama, Joshua Tenenbaum, and
[Roger Levy. 2023. LINC: A neurosymbolic approach](https://doi.org/10.18653/v1/2023.emnlp-main.313)
[for logical reasoning by combining language models](https://doi.org/10.18653/v1/2023.emnlp-main.313)
[with first-order logic provers. In Proceedings of the](https://doi.org/10.18653/v1/2023.emnlp-main.313)
_2023 Conference on Empirical Methods in Natural_
_Language Processing, pages 5153–5176, Singapore._
Association for Computational Linguistics.
Liangming Pan, Alon Albalak, Xinyi Wang, and
[William Wang. 2023. Logic-LM: Empowering large](https://doi.org/10.18653/v1/2023.findings-emnlp.248)
[language models with symbolic solvers for faithful](https://doi.org/10.18653/v1/2023.findings-emnlp.248)
[logical reasoning. In Findings of the Association](https://doi.org/10.18653/v1/2023.findings-emnlp.248)
_for Computational Linguistics: EMNLP 2023, pages_
3806–3824, Singapore. Association for Computational Linguistics.
Pruthvi Patel, Swaroop Mishra, Mihir Parmar, and
[Chitta Baral. 2022. Is a question decomposition unit](https://doi.org/10.18653/v1/2022.emnlp-main.302)
[all we need?](https://doi.org/10.18653/v1/2022.emnlp-main.302) In Proceedings of the 2022 Confer_ence on Empirical Methods in Natural Language_
_Processing, pages 4553–4569, Abu Dhabi, United_
Arab Emirates. Association for Computational Linguistics.
[David Poole and Alan K. Mackworth. 2010. Artificial](http://www.cambridge.org/uk/catalogue/catalogue.asp?isbn=9780521519007)
_[Intelligence - Foundations of Computational Agents.](http://www.cambridge.org/uk/catalogue/catalogue.asp?isbn=9780521519007)_
Cambridge University Press.
Ansh Radhakrishnan, Karina Nguyen, Anna Chen,
Carol Chen, Carson Denison, Danny Hernandez,
Esin Durmus, Evan Hubinger, Jackson Kernion,
Kamile Lukoši˙ ut¯ e, Newton Cheng, Nicholas Joseph,˙
Nicholas Schiefer, Oliver Rausch, Sam McCandlish,
Sheer El Showk, Tamera Lanham, Tim Maxwell,
Venkatesa Chandrasekaran, Zac Hatfield-Dodds,
Jared Kaplan, Jan Brauner, Samuel R. Bowman, and
Ethan Perez. 2023. [Question decomposition im-](http://arxiv.org/abs/2307.11768)
[proves the faithfulness of model-generated reasoning.](http://arxiv.org/abs/2307.11768)
Danilo Neves Ribeiro, Shen Wang, Xiaofei Ma,
Henghui Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica Ramos, zhiheng huang,
William Yang Wang, George Karypis, Bing Xiang,
[and Dan Roth. 2023. STREET: A MULTI-TASK](https://openreview.net/forum?id=1C_kSW1-k0)
[STRUCTURED REASONING AND EXPLANA-](https://openreview.net/forum?id=1C_kSW1-k0)
[TION BENCHMARK. In International Conference](https://openreview.net/forum?id=1C_kSW1-k0)
_on Learning Representations._
[Abulhair Saparov and He He. 2023. Language models](https://openreview.net/pdf?id=qFVVBzXxR2V)
[are greedy reasoners: A systematic formal analysis](https://openreview.net/pdf?id=qFVVBzXxR2V)
[of chain-of-thought. In The Eleventh International](https://openreview.net/pdf?id=qFVVBzXxR2V)
_Conference on Learning Representations, ICLR 2023,_
_Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
Kazutoshi Shinoda, Saku Sugawara, and Akiko Aizawa.
[2021. Can question generation debias question an-](https://doi.org/10.18653/v1/2021.mrqa-1.6)
[swering models? a case study on question–context](https://doi.org/10.18653/v1/2021.mrqa-1.6)
[lexical overlap. In Proceedings of the 3rd Workshop](https://doi.org/10.18653/v1/2021.mrqa-1.6)
_on Machine Reading for Question Answering, pages_
63–72, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle
[Pineau, and William L. Hamilton. 2019. CLUTRR:](https://doi.org/10.18653/v1/D19-1458)
[A diagnostic benchmark for inductive reasoning from](https://doi.org/10.18653/v1/D19-1458)
[text.](https://doi.org/10.18653/v1/D19-1458) In Proceedings of the 2019 Conference on
_Empirical Methods in Natural Language Processing_
_and the 9th International Joint Conference on Natu-_
_ral Language Processing (EMNLP-IJCNLP), pages_
4506–4515, Hong Kong, China. Association for Computational Linguistics.
Zayne Sprague, Kaj Bostrom, Swarat Chaudhuri, and
[Greg Durrett. 2023. Deductive additivity for plan-](https://doi.org/10.18653/v1/2023.nlrse-1.11)
[ning of natural language proofs. In Proceedings of](https://doi.org/10.18653/v1/2023.nlrse-1.11)
_the 1st Workshop on Natural Language Reasoning_
_and Structured Explanations (NLRSE), pages 139–_
156, Toronto, Canada. Association for Computational
Linguistics.
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021.
[ProofWriter: Generating implications, proofs, and](https://doi.org/10.18653/v1/2021.findings-acl.317)
[abductive statements over natural language. In Find-](https://doi.org/10.18653/v1/2021.findings-acl.317)
_ings of the Association for Computational Linguis-_
_tics: ACL-IJCNLP 2021, pages 3621–3634, Online._
Association for Computational Linguistics.
Oyvind Tafjord, Bhavana Dalvi Mishra, and Peter Clark.
[2022. Entailer: Answering questions with faithful](https://doi.org/10.18653/v1/2022.emnlp-main.134)
-----
[and truthful chains of reasoning. In Proceedings of](https://doi.org/10.18653/v1/2022.emnlp-main.134)
_the 2022 Conference on Empirical Methods in Nat-_
_ural Language Processing, pages 2078–2093, Abu_
Dhabi, United Arab Emirates. Association for Computational Linguistics.
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy
[Liang, Jeff Dean, and William Fedus. 2022. Emer-](https://openreview.net/forum?id=yzkSU5zdwD)
[gent abilities of large language models. Trans. Mach.](https://openreview.net/forum?id=yzkSU5zdwD)
_Learn. Res., 2022._
Jan Wielemaker, Tom Schrijvers, Markus Triska, and
Torbjörn Lager. 2012. SWI-Prolog. _Theory and_
_Practice of Logic Programming, 12(1-2):67–96._
[Kaiyu Yang, Jia Deng, and Danqi Chen. 2022. Gen-](https://doi.org/10.18653/v1/2022.emnlp-main.7)
[erating natural language proofs with verifier-guided](https://doi.org/10.18653/v1/2022.emnlp-main.7)
[search. In Proceedings of the 2022 Conference on](https://doi.org/10.18653/v1/2022.emnlp-main.7)
_Empirical Methods in Natural Language Processing,_
pages 89–105, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
Sen Yang, Xin Li, Leyang Cui, Lidong Bing, and
Wai Lam. 2023. Neuro-symbolic integration brings
causal and reliable reasoning proofs. arXiv preprint.
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio,
William Cohen, Ruslan Salakhutdinov, and Christo[pher D. Manning. 2018. HotpotQA: A dataset for](https://doi.org/10.18653/v1/D18-1259)
[diverse, explainable multi-hop question answering.](https://doi.org/10.18653/v1/D18-1259)
In Proceedings of the 2018 Conference on Empiri_cal Methods in Natural Language Processing, pages_
2369–2380, Brussels, Belgium. Association for Computational Linguistics.
Wen-tau Yih, Matthew Richardson, Chris Meek, Ming[Wei Chang, and Jina Suh. 2016. The value of se-](https://doi.org/10.18653/v1/P16-2033)
[mantic parse labeling for knowledge base question](https://doi.org/10.18653/v1/P16-2033)
[answering. In Proceedings of the 54th Annual Meet-](https://doi.org/10.18653/v1/P16-2033)
_ing of the Association for Computational Linguistics_
_(Volume 2: Short Papers), pages 201–206, Berlin,_
Germany. Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali
Farhadi, and Yejin Choi. 2019. Hellaswag: Can a
machine really finish your sentence? In Proceedings
_of the 57th Annual Meeting of the Association for_
_Computational Linguistics._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/pdf?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/pdf?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations, ICLR 2023, Kigali, Rwanda, May 1-5,_
_2023._
-----
**A** **Formal definition of SymBa**
In this section, we provide an algorithmic description of SymBa. SymBa can be viewed as an extension of the SLDNF resolution (Selective Linear
Definite resolution with Negation as Failure) algorithm (Apt and Doets, 1992) typically used in
top-down solvers like SWI-Prolog (Wielemaker
et al., 2012). A simplified pseudo-code for SymBa
is presented in Algorithm 1. The notations used
throughout this section are presented in Table 6.
Notation Definition
_h, p, q_ Term (proposition)
T Set of all terms
_B_ Binding (mapping from variables to variables/constants)
_B_ List of bindings.
B Set of all bindings.
**s** Statement (rule, fact)
**s.head** Rule head (term)
**s.body** Rule body (list of terms)
_C_ Context written in natural language
Table 6: Notations used in Appendix A.
Before proceeding to the algorithm, we introduce three procedures about unification and binding, namely UNIFY : T × T →{0, 1}, BINDING :
T × T → B, and BIND : T × B → T. As
described in Section 2.2, two terms are said to
unify if there is a valid binding that makes the
terms identical. UNIFY returns a boolean value
indicating whether the two terms unify or not.
BINDING returns the binding of two terms if they
unify. BIND takes a term (possibly containing
variables) and a binding as its argument, and returns the bound term after substituting the variables from the term to the corresponding values.
By definition, for any two terms p and q that
satisfy UNIFY(p, q), BIND(p, BINDING(p, q)) =
BIND(q, BINDING(p, q)) should always hold.
SOLVE is the main procedure of SymBa. It receives a query term q as a parameter and refers to
the global database (set of statements) D to compute _final, the list of all provable bindings for q. If_
_B_
_Bfinal is not empty, it implies that q can be proved_
on D. Otherwise, the query cannot be proved.
Negation is handled first, in Lines 5-12. In the
negation-as-failure semantics, the negation not q
succeeds when q fails, and vice versa. Therefore,
whenever the query is negated (i.e. not qpos), its
non-negative dual (i.e. qpos) is proved first (Line
6). When the proof succeeds, the negated goal
should be failed, therefore an empty list ( _final) is_
_B_
returned (Line 8). When the proof fails, an empty
binding is added to the Bfinal to indicate success
of the original query.
The main loop is shown in Lines 13-31. First, the
statements that have heads unifying with the query
are selected from the database. The initial binding
_B0 is the binding between the statement’s head_
and the query. For each subgoal pt, we bind the
subgoal using the previous binding Bt 1,i (Line
_−_
19). The partially bound subgoal pt,i is proved
by recursively calling SOLVE, which returns a list
of bindings for pt,i (Line 20). The new bindings
_Bt,i,j are added to original binding Bt,i (Line 22),_
which are then propagated to the next subgoal pt+1.
When all subgoals are proved, the query is proved,
and the bindings are added to the answer set (Line
27). Note that if the query contains variables, these
bindings can be used to bind the query to obtain the
list of possible ’solutions’, as presented in Lines
41-45.
Single-step statement generation, the novel
mechanism of SymBa, is shown in Lines 32-38.
The flag isProved denotes whether the solver has
succeeded in finding a statement that unifies with
the query. If the value is false, the single-step statement generation (SINGLESTEPSTMTGEN) process
described in Section 3.2 is called, which is expected
to return a new statement snew from the context C
and the query q. If the procedure succeeds, snew is
added to D, and the solver re-attempts to solve q
with the updated database.
If the negation-as-failure succeeded (Line 10), it
cannot be determined if the positive query is truly
unprovable because queries that have never been
previously addressed will always fail. Therefore,
the isProved flag remains false in this case, which
will later invoke the single-step statement generation.
For brevity, here we do not further describe additional features, namely comparison operators, odd
loop on negation (OLON) (Marple et al., 2017),
goal tabling (to prevent duplicate calls and infinite
recursion), and proof tree generation. Full imple[mentation of SymBa can be found in this reposi-](https://osf.io/g9h42/?view_only=74ab8cc288404502bd2d820820ad9426)
[tory.](https://osf.io/g9h42/?view_only=74ab8cc288404502bd2d820820ad9426)
**B** **Dataset details**
This section describes the sampling, preprocessing,
and evaluation of benchmarks. Table 7 presents
brief information and statistics about the seven
benchmarks used in this paper.
All datasets used in this study allow free
-----
**Algorithm 1 Algorithm of SymBa**
1: global D ← _empty set_
2: procedure SOLVE(q) _▷_ Input: query term, Returns: list of bindings
3: _final_ _empty list_
_B_ _←_
4: _isProved ←_ **false**
5: **if q is negated (i.e. not qpos) then**
6: _pos_ SOLVE(qpos)
_B_ _←_
7: **if Bpos is empty then**
8: **return empty list** _▷_ NAF fail
9: **else**
10: Append empty binding to _final_
_B_
11: **end if**
12: **end if**
13: _S ←{s ∈D | UNIFY(s.head, q)}_ _▷_ Set of statements that have heads unifying with q
14: **for s ∈S do**
15: _B0 ←_ [BINDING(s.head, q)]
16: **for pt ∈** **s.body = [p1, ..., pT ] do**
17: _t_ _empty list_
_B_ _←_
18: **for Bt** 1,i _t_ 1 = [Bt 1,0, ..., Bt 1,I ] do
_−_ _∈B_ _−_ _−_ _−_
19: _pt,i ←_ BIND(pt, Bt−1,i) _▷_ Apply bindings from head & previous subgoals
20: _t,i_ SOLVE(pt,i) _▷_ Solve the partially bound subgoal
_B_ _←_
21: **for Bt,i,j** _t,i = [Bt_ 1,0, ..., Bt 1,J ] do
_∈B_ _−_ _−_
22: _Bt,i,j ←_ _Bt,i,j ∪_ _Bt−1,i_ _▷_ Update the binding
23: **end for**
24: Extend Bt,i to Bt
25: **end for**
26: **end for**
27: Extend BT to Bfinal
28: **if BT is not empty then**
29: _isProved ←_ **true**
30: **end if**
31: **end for**
32: **if isProved then** _▷_ Subgoal success
33: **return** _final_
_B_
34: **else** _▷_ Subgoal failure
35: **snew ←** SINGLESTEPSTMTGEN(C, q)
36: Add snew to D
37: **return SOLVE(q)**
38: **end if**
39: end procedure
40:
41: C ← _user input_
42: qinit _user input_
_←_
43: B ← SOLVE(q)
44: for qfinal BIND(qinit, B) _B_ **do**
_∈{_ _|_ _∈B}_
45: **print qfinal**
46: end for
-----
|Dataset|Type|Test size|Avg. steps|Avg. sents|N-shot|
|---|---|---|---|---|---|
|ProofWriter (Tafjord et al., 2021) Birds-Electricity (Ibid.) ParaRules (Clark et al., 2020) PrOntoQA (Saparov and He, 2023) CLUTRR (Sinha et al., 2019) MAWPS (Koncel-Kedziorski et al., 2016) GSM8k (Cobbe et al., 2021)|Deductive Deductive Deductive Deductive Relational Arithmetic Arithmetic|300 300 300 100 100 300 270|4.52 2.08 4.37 4.00 4.86 3.06 9.22|19.12 13.77 10.56 21.84 5.20 3.20 4.87|3 3 3 3 3 5 5|
Table 7: Statistics of each test set. Avg. steps denotes the average number of statements (facts and rules) required to
prove the goal, and Avg. sents is the average number of sentences that each context contains. N-shot denotes the
number of few-shot examples to prompt LLMs in this study.
use, modification, and redistribution for noncommercial applications.
**B.1** **ProofWriter family**
**Test split sampling From the ProofWriter fam-**
ily, we sample the evaluation set from the test
split of the closed-world assumption subset (CWA).
Specifically, for ProofWriter, we use the dep5 subset, which has a deepest maximum reasoning depth
of 5. Since a single context includes multiple questions, we first sample 300 contexts and randomly
sample a question from it. As a result, we obtain
300 (context, question) tuples for each dataset).
**In-context demonstrations We randomly sam-**
ple 3 examples from ProofWriter-dep3 and -dep2
data that contain shorter contexts to test the length
generalization ability of each method. For CoT
prompting and Least-to-most prompting, we provide the pre-order traversal of the golden proof tree
provided for each instance, with stopwords like
_since and so that are known to enhance the perfor-_
mance in CoT prompting (Kazemi et al., 2023). For
LAMBADA, we use the prompt format provided
in the original paper, which is populated with the
sampled in-context examples.
**Logic** **program** We consistently apply
verb(subject, object) format to both datasets.
For instance, Bald eagle does not eat the mouse.
translates to not eats(bald_eagle, mouse).
Note that we apply the same format for adjective
facts. For example, the corresponding symbolic
form for Alan is young. is is(alan, young),
opposed to another commonly used form
young(alan) or young(alan, true) (Olausson
et al., 2023; Pan et al., 2023).
As a common practice for measuring the reasoning ability in out-of-distribution data (BirdsElectricity, ParaRules) using in-domain data
(ProofWriter) (Tafjord et al., 2021), we use the
prompts and examples sampled from ProofWriter
train split for the other two benchmarks.
**Evaluation We use the true/false labels provided**
with the original dataset without modification.
**B.2** **PrOntoQA**
**Test split sampling We sample the test set using**
the original script from Saparov and He (2023),
using fictional entity names (e.g. Every yumpus is
_a jompus.). However, due to an unresolved issue_
of the script, the script only allows to generate a
reasoning chain of a maximum of four steps.
**In-context demonstrations Similar to the**
ProofWriter family, we use few-shot demonstrations with 8 premises, which is significantly lower
than average (NN premises).
We use identical logic program formats and
evaluation criteria for PrOntoQA with other
ProofWriter variants.
**B.3** **CLUTRR**
**Test split sampling We randomly sample 100**
examples from the test split of CLUTRR v1. To
generate false labels, we sample half of the examples and alter the relation label of the gold triplet
to a random one.
**In-context demonstrations We randomly sam-**
ple 3 stories from the train split that only contains
2-3 relations to test the length generalization ability
of each methods. For CoT, we provide a golden
chain of kinship relations that connect the two
queried entities. For Least-to-most prompting, each
decomposed question contains information about
an entity and a relation, asking for the bridging
entity. (e.g. Who is the father of Andrea?)
**Logic program and expert system We in-**
troduce 39 manually crafted rules about family relationships. To reduce excessive recursion, we use separate predicate names for the
base fact and inferred relations. For instance,
’George is the father of Andrea.’ is translated as isRelationOf(george, father, andrea)
if it is a fact directly from the context, or
-----
relation(george, father, andrea) if it is inferred by more than one bridging entities. Note
that the predicate name for the latter casts no effect on the performance as it is only used for the
symbolic solver and not the LLM.
Examples of the expert system rules are
presented as follows. Note that the semicolon(;)
denotes that the rule conditions are satisfied when
either of the groups is satisfied (disjunction).
relation(A, R, B) :isRelation(A, R, B).
relation(A, son, B) :isRelationOf(A, brother, C),
relation(C, (son;daughter), B).
relation(A, daughter, B) :isRelationOf(A, sister, C),
relation(C, (son;daughter), B).
...
**Evaluation Each model is instructed to predict**
if the label is correct or not (randomized).
**B.4** **MAWPS**
**Test split sampling We use the first 300 exam-**
ples from the original test split.
**In-context demonstrations Five few-shot ex-**
amples are randomly sampled from the train split.
We manually create annotations as the benchmark
does not include a reasoning chain.
**Logic** **program** We denote the meaning of each numeric value with predicates
of arity 1, as in number_of_oranges(_) or
fraction_of_trombone_section(_). We use
answer(X) to express the final answer in all examples and evaluate if the variable X is successfully
bound to the right numeric value (e.g. answer(5)).[3]
Facts denote the base value mentioned in the
text (e.g. number_of_yellow_flowers(10)), and
rules express the arithmetic relations between each
value (e.g. fraction_of_trumpet_section(X)
:- fraction_of_trombone_section(A),
_X = A ∗_ 4.).
**Evaluation We use the numeric answer provided**
with the original dataset. If the answer is not a numeric string (e.g. 25,000 or 42 pages), they are
considered incorrect. While Standard prompting
exceptionally suffers from this constraint, we claim
3While previous approaches in logic programmingintegrated LLMs use an additional step to specify which predicate corresponds to the final answer (Pan et al., 2023), we do
not introduce this mechanism for universality.
that it is not unfair as each method is equally presented with 5-shot examples in the correct format.
**B.5** **GSM8k**
**Test split sampling We use the test split used**
in Yang et al. (2023), which contains 270 examples and is a subset of the original test split from
Cobbe et al. (2021). We calculate the number of
reasoning steps presented in Table 7 based on the
semi-structured solutions included in the dataset.
**In-context demonstrations We randomly sam-**
ple 5 questions from the train split. For CoT
prompting, we used the answer column from the
original dataset and removed the external call snippets (equations that are wrapped in double angle
brackets «...»). For Least-to-most prompting, we
reformulate the answer column from the ‘Socratic’
version of the dataset that formulates the reasoning chain as consecutive sequence of questions and
answers.
We use identical logic program formats and evaluation criteria for GSM8k with MAWPS.
**C** **Complete results**
Table 8 presents the complete results of the main
experiment (Section 5.1). We also report the performance of Standard prompting (generating the answer without any rationales) and Chain-of-thought
prompting for comparison.
-----
|Model|Method|Performance|Col4|Col5|Col6|Col7|Col8|Col9|
|---|---|---|---|---|---|---|---|---|
|||ProofWriter|BirdsElec|ParaRules|PrOntoQA|CLUTRR|MAWPS|GSM8k|
|GPT-4|Standard CoT|63.2 ±0.43 70.5 ±2.13|77.8 ±1.17 81.2 ±1.41|61.3 ±1.10 60.5 ±1.03|83.0 ±0.82 96.8 ±1.26|72.0 ±4.00 †84.5 ±1.29|†94.2 ±0.58 †99.1 ±0.49|29.4 ±1.81 †94.2 ±1.00|
||Least-to-most LAMBADA SymBa|71.5 ±2.10 69.7 ±1.18 79.8 ±1.06|88.2 ±0.76 83.4 ±1.20 94.4 ±0.62|71.8 ±0.71 59.7 ±1.30 79.2 ±1.12|87.5 ±1.29 96.0 ±1.41 96.3 ±1.26|81.5 ±0.58 X 84.3 ±2.06|84.3 ±0.56 X 86.7 ±0.69|60.6 ±1.96 X 63.8 ±0.74|
|Claude-3|Standard CoT|61.3 ±0.00 67.0 ±2.00|66.0 ±0.00 73.3 ±0.00|61.3 ±0.00 57.3 ±0.00|†96.0 ±0.00 †96.0 ±0.00|80.0 ±0.00 67.0 ±0.00|†96.3 ±0.00 88.0 ±0.00|17.0 ±0.00 †92.2 ±0.00|
||Least-to-most LAMBADA SymBa|60.3 ±0.00 69.3 ±0.00 77.6 ±0.00|75.7 ±0.00 62.7 ±0.00 77.3 ±0.00|57.3 ±0.00 57.7 ±0.00 69.0 ±0.00|86.0 ±0.00 67.0 ±0.00 91.0 ±0.00|67.0 ±0.00 X 85.0 ±0.00|94.2 ±0.15 X 94.1 ±0.15|59.3 ±0.00 X 67.4 ±0.00|
|LLaMa-3|Standard CoT|63.6 ±0.50 64.8 ±1.26|78.7 ±0.00 79.0 ±1.29|65.3 ±0.00 63.0 ±1.67|†99.0 ±0.00 92.5 ±4.12|75.0 ±0.00 77.0 ±0.00|†96.3 ±0.00 †95.0 ±0.00|26.2 ±0.00 †89.5 ±1.35|
||Least-to-most LAMBADA SymBa|61.4 ±0.34 64.0 ±1.63 70.4 ±1.26|71.0 ±0.00 82.3 ±0.00 92.9 ±1.10|66.7 ±0.00 62.1 ±1.10 71.7 ±0.00|95.0 ±0.00 90.8 ±0.50 93.3 ±0.50|72.0 ±0.00 X 90.5 ±0.58|89.0 ±0.00 X 87.9 ±0.70|61.5 ±0.00 X 67.0 ±0.00|
Table 8: Average accuracy (%) and standard deviation on 4-runs per each benchmark and reasoning methods.
Boldface font indicates that the score is significantly higher than other backward chaining methods, which is
equivalent to the boldface in Table 2. Daggers represent that non-structured methods (Standard, Chain-of-thought)
achieves significantly higher score than the best structured backward chaining results. 95% confidence applies to
both notations. Note that the temperature was set to 0 for all runs, which results in zero standard deviation in some
settings.
-----
| [
"Jinu, Lee",
"Wonseok, Hwang"
] | 2024-02-20T00:00:00 | null | false | 1 | 0 | null | https://arxiv.org/abs/2402.12806v2 | https://arxiv.org/abs/2402.12806 | https://www.semanticscholar.org/paper/9f64c85d2f857d2950ca57c1ec352c7557c78e80 |
Task Oriented In-Domain Data Augmentation | Large Language Models (LLMs) have shown superior performance in various applications and fields. To achieve better performance on specialized domains such as law and advertisement, LLMs are often continue pre-trained on in-domain data. However, existing approaches suffer from two major issues. First, in-domain data are scarce compared with general domain-agnostic data. Second, data used for continual pre-training are not task-aware, such that they may not be helpful to downstream applications. We propose TRAIT, a task-oriented in-domain data augmentation framework. Our framework is divided into two parts: in-domain data selection and task-oriented synthetic passage generation. The data selection strategy identifies and selects a large amount of in-domain data from general corpora, and thus significantly enriches domain knowledge in the continual pre-training data. The synthetic passages contain guidance on how to use domain knowledge to answer questions about downstream tasks. By training on such passages, the model aligns with the need of downstream applications. We adapt LLMs to two domains: advertisement and math. On average, TRAIT improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain. | TRAIT, a task-oriented in-domain data augmentation framework, is proposed and adapted to two domains: advertisement and math, which improves LLM performance by 8% in the advertisement domain and 7.5% in the math domain. | ## TASK ORIENTED IN-DOMAIN DATA AUGMENTATION
**Xiao Liang[1][,][3][∗], Xinyu Hu[2][∗], Simiao Zuo[2], Yeyun Gong[3][†], Qiang Lou[2], Yi Liu[2],**
**Shao-Lun Huang[1], Jian Jiao[2][†],**
1Tsinghua University 2Microsoft AI 3Microsoft Research
ABSTRACT
Large Language Models (LLMs) have shown superior performance in various applications and fields. To achieve better performance on specialized domains such
as law and advertisement, LLMs are often continue pre-trained on in-domain data.
However, existing approaches suffer from two major issues. First, in-domain data
are scarce compared to general domain-agnostic data. Second, data used for continual pre-training are not task-aware, such that they may not be helpful to downstream applications. We propose TRAIT, a task-oriented in-domain data augmentation framework. Our framework is divided into two parts: in-domain data selection and task-oriented synthetic passage generation. The data selection strategy
identifies and selects a large amount of in-domain data from general corpora, and
thus significantly enriches domain knowledge in the continual pre-training data.
The synthetic passages contain guidance on how to use domain knowledge to answer questions about downstream tasks. By training on such passages, the model
aligns with the need of downstream applications. We adapt LLMs to two domains:
advertisement and math. On average, TRAIT improves LLM performance by 8%
in the advertisement domain and 7.5% in the math domain.
1 INTRODUCTION
Large language models (LLMs) have achieved significant performance improvements in various applications such as language modeling (Brown et al., 2020; Touvron et al., 2023; Chowdhery et al.,
2023) and visual understanding (Radford et al., 2021). They have also shown superior performance
in fields such as finance (Xie et al., 2023b), e-commerce (Ma et al., 2023) and healthcare (Bakhshandeh, 2023). However, the models are usually trained on a large amount of general domain-agnostic
data, such as web corpora. Because of the lack of domain-specific training, LLMs suffer from subpar
performance when directly applied to certain domains such as advertisement.
To adapt LLMs to a specific domain, continual pre-training methods (Gururangan et al., 2020) are
commonly applied. In particular, the LLM is continual pre-trained on in-domain corpora, such that it
can acquire domain knowledge and better adapt to downstream tasks. Existing works (Cheng et al.,
2023) have shown that such a technique drastically improves performance of LLMs on domains
such as law and bio-medicine.
There are two major issues when continual pre-training LLMs. First, in-domain data are scarce.
LLMs are pre-trained on large domain-agnostic corpora. For example, the web corpus used for
pre-training contains more than ten trillion tokens. However, domain-specific data are magnitudes
smaller, i.e., the ads in-domain corpus in our experiments contains only several billion tokens. Such
a data scarcity issue significantly hinders model performance after continual pre-training.
Second, in-domain data used for continual pre-training are not task-oriented. Many existing works
(Achiam et al., 2023; Li et al., 2023; Liu et al., 2024; Shao et al., 2024) focus on generating or
selecting in-domain data without considering the downstream tasks. That is, the continual pretraining data are often passages that describe keywords/concepts of the target domain, which are
generated or selected without considering whether the passages benefit downstream tasks.
_∗Equal contribution. Work done during Xiao Liang’s internship at Microsoft Research Asia._
_†Corresponding authors._
-----
We propose TRAIT (Task oRiented in-domAIn data augmenTation), a data augmentation framework driven by downstream tasks of the domain. The framework is divided into two parts. First,
to address the data scarcity issue of in-domain corpora, we propose a data selection strategy. The
proposed algorithm identifies in-domain data from general corpora, and also applies a quality filter
to ensure that the selected data have high educational value (Gunasekar et al., 2023). In practice,
the amount of selected data is magnitudes larger than the in-domain dataset. For example, for the
advertisement applications, the in-domain dataset contains about 1B tokens, and TRAIT selects an
additional 15B tokens from web corpora.
Second, we propose a task-oriented synthetic passage generation guideline. Specifically, each generated passage contains multiple problems, where each problem comes from a different downstream
task from the domain. Then, for each problem, TRAIT generates a problem-specific paragraph that
suggests possible answers to the problem. Additionally, the synthetic passage also contains an en_lightenment paragraph. This paragraph focuses on relationships among problems in the passage,_
including common and individual characteristics that are used to generate answers. Intuitively, the
problem-specific paragraphs teach the model how to use techniques to solve a particular problem.
And the enlightenment paragraph teaches the model common and unique aspects of all problems in
the domain.
To fully exploit the power of TRAIT, we employ a two-stage training strategy for continual pretraining. In the first stage, the model is trained with in-domain data, including both the original
in-domain corpora and the selected data. In this stage, the model adapts to the domain by learning
domain knowledge. Then, in the second stage, the model is trained with the task-oriented passages.
During this stage, the model learns how to use domain knowledge to solve problems, such that it
better aligns with the need of downstream tasks.
We conduct extensive experiments by adapting LLMs to two domains: advertisement and math. For
the advertisement domain we consider 7 downstream tasks, and TRAIT improves existing continual
pre-training methods by 6.5% average score, while improving the base LLM (without continual pretraining) by 8%. For the math domain we consider 9 downstream tasks, where TRAIT outperforms
the baseline by 5.5% average score and outperforms the base LLM by 7.5%. For the challenging
MATH task in math domain, TRAIT outperforms the base LLM by over 15%.
2 METHOD
2.1 OVERVIEW
We propose TRAIT, a data augmentation framework with two components. First, we propose a data
selection strategy to select in-domain data from general corpus. In this way, we can significantly enlarge the domain-specific training data, such that the data contain more domain knowledge compared
with the original small in-domain corpus. Second, we propose a guideline to generate task-oriented
passages from in-domain data. The synthetic passages focus on using domain knowledge to solve
the given tasks.
We use both the in-domain data and the synthetic passages to continual pre-train LLMs. Specifically,
we first train the model on in-domain data, such that the model can learn more domain knowledge.
Then, we train the model on the task-oriented synthetic passages. During this stage, the model learns
to solve downstream tasks using the acquired domain knowledge. The proposed data augmentation
framework and training strategy can drastically improve model performance by adapting LLMs to
specific domains.
2.2 IN-DOMAIN DATA SELECTION
In practice, the size of general corpus is orders of magnitude larger than domain-specific corpus.
For example, the ads domain corpus contains about 1B tokens in our experiments, while the general
web corpus contains trillions of tokens. To alleviate such a data scarcity issue, we propose to select
in-domain data from general corpus.
We train a FastText (Joulin et al., 2017) classifier to identify in-domain data from large amount of
domain agnostic data. Specifically, to train the FastText classifier, we select a certain number of
-----
Figure 1: An example of a task-oriented synthetic passage on the ads domain. Left: two downstream
tasks (Query Rewriting and Query-LandingPage Relevance) and inputs. Right: the structure of the
generated passage, including two problem-specific paragraphs and an enlightenment paragraph.
in-domain data as positive samples and the same amount of out-of-domain data as negative samples.
The trained binary classifier is then used to select in-domain data from the general corpus (e.g., the
web corpus).
We apply a filter to ensure that the in-domain data (both the original in-domain corpus and the
selected data) have high educational value (Gunasekar et al., 2023). In this way, we can boost the
quality of the filtered in-domain data, which in turn improves performance of the models.
The proposed data selection strategy has two benefits. First, it can significantly enrich in-domain
data. In practice, the amount of selected data is magnitudes larger than the in-domain dataset. For
example, the original ads domain corpus contains about 1B tokens in our experiments, and we
select an additional 15B tokens from the web corpus (after selection and filtering). Second, the data
selection strategy enables replay (Ibrahim et al., 2024), such that generality of LLMs is largely kept
after continual pre-training (see Table 5 for experiments). In more details, for a specific LLM, replay
happens when the continual pre-training data contain a certain amount of pre-training data (e.g., the
web corpus). It has been observed that replay is crucial to keep LLM’s generality (e.g., instruction
following) after training.
2.3 SYNTHETIC DATA GENERATION
To adapt LLMs to a specific domain, we first train the model on in-domain data, such that the model
can acquire domain knowledge. Another key aspect crucial to model performance is the model’s
ability to use such knowledge to solve downstream tasks. To address this issue, we propose a guideline to generate task-oriented synthetic passages. In this way, the model is aware of downstream
tasks during continual training, and thus model performance can be significantly improved. In the
next section, we describe how to generate the task-oriented passages in detail.
3 TASK-ORIENTED PASSAGE GENERATION
3.1 GUIDELINE
The goal of the synthetic passages is to describe how to solve downstream tasks using domain knowledge. In TRAIT, each synthetic passage describes the common and individual characteristics of all
downstream tasks in a domain. This resembles human learning: we learn how to solve individual
problems, as well as common knowledge that can be applied to all problems.
We propose a guideline to generate task-oriented synthetic passages:
-----
Figure 2: An example of a task-oriented synthetic passage on the math domain. Left: the selected
two tasks (GSM8k and SAT) with an example problem from each task. Right: the structure of the
generated passage, including two problem-specific paragraphs and an enlightenment paragraph.
_⋄_ We build each passage using several problems, where each problem comes from a different downstream task.
_⋄_ Within a passage, for each problem we generate a problem-specific paragraph that suggests possible answers to the problem. Different prompts are used to generate paragraphs for different tasks,
while the same prompt is used for problems from the same task.
_⋄_ For each passage, we generate an enlightenment paragraph. This paragraph emphasizes relationships among problems, including shared and individual characteristics that are used to generate
answers to the problems.
The enlightenment paragraph requires summarizing common and individual aspects of different
problems from different downstream tasks. This is natural for certain domains. For example, in
the ads domain in Figure 1, different questions in the passage ask about different aspects of the
_same query. As another example, in the finance domain, different features of the same company_
may be useful for different tasks. We call these domains entity-centered. These domains focus on
**understanding of entities from various aspects.**
On the other hand, in domains such as math and physics, the common aspects of different problems
are not entities, but knowledge or techniques that can be applied to solve the problems. For example,
in the math domain, each passage may contain multiple questions that require different techniques
to solve, e.g., the GSM8k benchmark focuses on simple arithmetic, while the MATH benchmark
focuses on logical reasoning. We call these domains knowledge-centered. These domains focus on
using universal knowledge to solve problems.
3.2 EXAMPLE: TASK-ORIENTED PASSAGE FOR ENTITY-CENTERED DOMAINS
Recall that each synthetic passage is divided into two parts: problem-specific paragraphs and an
enlightenment paragraph. We use ads domain as an example to illustrate how the two components
are generated.
We select problems from different downstream tasks in the ads domain. In Figure 1, the passage
contains two tasks (or two problems): Query Rewriting and Query-LandingPage Relevance.
For Query Rewriting, the task is to generate variations that maintain the search intent but diversify
the expression. For the query ”Machu Picchu tour packages luxury”, the generated problem-specific
paragraph looks like: Potential rewrites for the query could include “luxury Machu Picchu travel
_package”, “high-end tours to Machu Picch”._
For Query-LandingPage Relevance, the task is to decide whether the content of the landing page (the
webpage to which the user is directed after clicking on an ad) addresses the intent of the search query.
For the query ”Machu Picchu tour packages luxury” and the landing page ”The experts in boutique
_travel ...”, the generated problem-specific paragraph looks like: The landing page details, such as_
-----
_the expertise of the travel specialists ... directly correspond to the user’s search for a luxurious_
_Machu Picchu tour, demonstrating a strong relevance._
We also generate an enlightenment paragraph that focuses on relationships among the downstream
tasks. For example, in Figure 1, the enlightenment paragraph is: The shared learning is the impor_tance of the luxury and personalized aspects of the travel service. For Query Rewrites, focusing on_
_synonyms related to luxury and high-end services. Ensuring high relevance in Query-Landing Page_
_Relevance emphasizes the high-quality travel experiences to Machu Picchu._
The enlightenment paragraph severs as a central tenant of intelligence. It demonstrates which aspects
of the entity are needed in all downstream tasks, and which aspects are tailored for a specific task.
Such an explicit signal significantly boosts model performance. For example, for some downstream
tasks in the ads domain, adding the enlightenment paragraph brings a 3% performance gain (see
Table 3 for details).
3.3 EXAMPLE: TASK-ORIENTED PASSAGE FOR KNOWLEDGE-CENTERED DOMAINS
Different from entity-centered domains such as ads and finance, in knowledge-centered domains
such as math and physics, the focus is on applying universal knowledge to solve problems. We use
math domain as an example to demonstrate how we generate the task-oriented passages.
In Figure 2, we select two problems from the GSM8k and the SAT benchmarks. Then, we generate
problem-specific paragraphs to solve the problems. Similar to the ads domain, in the enlightenment
paragraph we summarize the common and individual techniques that are used to solve the problems.
In more details, the enlightenment paragraph states that the common knowledge used to solve the
two problems is ”algebraic operation for functions”. Each task requires additional techniques: the
GSM8k problem requires ”manipulating a linear function, while the SAT problem requires ”solving
_exponential equation”._
We remark that in knowledge-centered domains, the model learns universal knowledge that can
be applied to all problems, i.e., the techniques learned from one problem are applicable to other
problems. Therefore, the purpose of the enlightenment paragraph is to communicate about the
techniques required to solve problems. The paragraph explicitly points out the common techniques
needed by all downstream tasks, such that the model gains better awareness of the importance of
such techniques. The model also learns task-specific techniques as pointed out by the enlightenment
paragraph.
4 DATA PREPARATION
We apply our data augmentation method, TRAIT, to two domains: advertisement (ads) and math.
In this section, we detail the process for in-domain data selection and synthetic passage generation.
Refer to Appendix. D for examples and more details.
4.1 TASK-ORIENTED PASSAGE GENERATION
Following the guidelines for task-oriented passage generation, we select problems from each downstream task and use GPT-4 to generate the full passage (prompt details can be found in Appendix. D).
This approach is adaptable to any new target domain, leveraging GPT-4’s understanding of various
domains and its ability to handle diverse tasks within them. By selecting relevant problems and
utilizing our generation prompt, our method ensures effective application across multiple domains.
4.2 IN-DOMAIN DATA SELECTION
**Ads domain. We train a domain-specific FastText classifier for in-domain data selection, as detailed**
in Section 2.2. First, we randomly select 500k positive samples from the ads in-domain corpus. We
also select 500k negative samples from Slimpajama (Soboleva et al., 2023), alpaca (Taori et al.,
2023), OpenHermes-2.5 (Teknium, 2023) and Tulu-v2 (Ivison et al., 2023). Then, for model training, we set the model dimension to 256, learning rate to 0.1, the maximum word n-gram length to 3,
the minimum word occurrence to 3, and the epoch to 3. Next, we apply the trained classifier to select
samples with the highest scores from fineweb (Penedo et al., 2023). Finally, we apply a quality filter
-----
|Method|QAC QLP Auc|QR Den. Div.|AG DG TG TR Win Rate (%)|Avg. △|
|---|---|---|---|---|
Few-shot Results
|Mistral-7B Random DSIR TRAIT|69.48 59.54 63.94 60.29 60.03 60.27 65.18 65.91|– – – – – – – –|– – – – 42.20 55.15 53.62 47.75 57.43 50.18 50.88 51.95 51.55 60.42 54.60 55.10|– -1.54% +1.41% +7.97%|
|---|---|---|---|---|
Fine-tuned Zero-shot Results
|Mistral-7B Random DSIR TRAIT|82.93 78.81 83.44 78.83 84.10 79.96 84.40 80.71|5.29 3.06 5.45 3.29 5.48 3.36 5.57 3.36|– – – – 46.02 52.35 50.22 48.55 50.58 49.38 51.98 50.32 50.15 51.95 52.98 54.43|– +0.68% +2.60% +4.79%|
|---|---|---|---|---|
Table 1: Evaluation results of downstream tasks in the ads domain. Here, Avg △ is the average
relative improvement over all evaluation metrics for all tasks. Best results highlighted in bold.
|Method|MMLU GSM8K MATH† SVAMP ASDiv MAWPS TAB MQA SAT STEM|Avg.|
|---|---|---|
|Base Random DSIR TRAIT|40.9 12.4 65.4 68.5 87.4 52.7 34.6 49.3 65.6 34.8 14.0 60.4 65.2 82.4 39.7 34.9 46.4 56.2 46.4 22.4 64.5 72.7 88.0 47.1 38.6 43.2 71.9 56.4 28.0 71.8 76.0 89.5 53.1 46.1 49.5 75.0|53.0 48.2 55.0 60.5|
Table 2: Few-shot CoT reasoning results of downstream tasks in the math domain. For MATH[†],
evaluation is performed on OpenAI’s MATH subset (Lightman et al., 2023), as the original test
samples may be included in public training sets. Best results highlighted in bold.
to the data to ensure that each sample has an educational value over 1.5 (Gunasekar et al., 2023),
where in total we select 15B filtered tokens.
**Math domain. The domain-specific FastText classifier is trained similar to that in the ads domain.**
For the positive samples, we sample 200k examples from open-source benchmarks (such as GSM8k
and SAT) and 200k samples from OpenWebMath (Paster et al., 2023). The negative samples are
constructed similar with the ads domain. Due to the scarcity of math-related content in the general
corpus, we retrieved the math data from a combination of MathPile (Wang et al., 2023) and ProofPile-2 (Azerbayev et al., 2023), resulting in a collection of around 5.5 billion tokens.
5 EXPERIMENTS
We evaluate TRAIT by adapting LLMs to the ads and math domains via continual pre-training. In
[all the experiments, we use Mistral-7B (Jiang et al., 2023) as the base model. We compare TRAIT](https://huggingface.co/mistralai/Mistral-7B-v0.1)
with two data selection baselines: (1) Random sampling, which randomly selects samples from
open-source general corpora; and (2) DSIR (Xie et al., 2023a), an importance sampling strategy for
selecting in-domain data from general corpora, such that the selected data distibutionally similar
with the in-domain data. To promote fair comparisons, all models (including the base Mistral7B model, baseline models, and TRAIT) are trained on the same amount of data with the same
computational budget. Details about the training process can be found in Appendix A.
5.1 ADS DOMAIN
**Downstream tasks. We consider seven tasks within the ads domain. There are two classification**
tasks: Query-AdCopy Relevance (QAC) examines the relevance of a user’s query to the ad copy,
while Query-LandingPage Relevance (QLP) assesses relevance between a user’s query and the advertisement’s landing page content. For generation tasks, we focus on generating dynamic content:
Query Rewriting (QR) generates rewrites of user queries, Ad Copy Generation (AG) creates complete ad copy directly, and both Description Generation (DG) and Title Generation (TG) develop
-----
|Col1|Ads Domain QAC QLP DG TG TR|Math Domain GSM8k SAT MAWPS MATH†|
|---|---|---|
|TRAIT w/o E.P. w/o two-stage|84.40 80.71 51.95 52.98 54.43 83.77 79.80 50.95 51.92 51.58 83.23 79.84 50.22 51.50 51.32|56.4 75.0 89.5 28.0 54.6 68.8 89.7 27.2 53.2 59.4 89.3 25.2|
|---|---|---|
Table 3: Effectiveness of the enlightenment paragraph and the two-stage training approach. We
adopt the fine-tuning settings for the ads domain and the few-shot settings for the math domain.
Here w/o E.P. means the model is trained without the enlightenment paragraphs.
concise descriptions and titles from the ad’s landing page information. Additionally, Title Rewriting
(TR) enhances user engagement by refining the ad title to better suit the user’s query and the original
title context.
**Evaluation settings. We evaluate TRAIT under both few-shot and fine-tuning settings. Each task**
contains 5k test samples. For the fine-tuning setting, each task contains 30k training samples.
_⋄_ For the two natural language understanding tasks (QAC and QLP), we adopt Area under curve
(Auc) as the evaluation metric.
_⋄_ For AG, DG, TG and TR, we use ChatGPT (OpenAI, 2022) to calculate the winning rate of TRAIT
compared with the Mistral-7B model. Specifically, we prompt ChatGPT to choose the better answer
from responses generated by our model and Mistral-7B. In order to mitigate ChatGPT’s positional
bias for evaluation (Chen et al., 2024), we swap the positions of the two responses and prompt
ChatGPT again to choose the better answer. We average the outcomes from the two rounds as the
final winning rate.
_⋄_ The evaluation metrics for QR consist of diversity (Div.) and density (Den.), with details provided
in Appendix C.
**Results. Experimental results are summarized in Table 1. From the results, we see that TRAIT sig-**
nificantly outperforms both the Mistral-7B model and the baselines. Specifically, TRAIT achieves
average increases of 8.0% and 4.8% across all downstream tasks compared with Mistral-7B in the
few-shot and fine-tuning settings, respectively. And the proposed framework outperforms the best
performing baseline by 6.5% and and 2.2% in the few-shot and fine-tuning settings, respectively.
In the few-shot setting, TRAIT outperforms all the baselines in 4/6 tasks; while in the fine-tuning
setting, the proposed framework performs the best in 6/7 tasks.
5.2 MATH DOMAIN
**Downstream tasks. We evaluate the models across nine mainstream benchmarks: GSM8k (Cobbe**
et al., 2021), MATH (Hendrycks et al., 2021), SVAMP (Patel et al., 2021), ASDIV (Miao et al.,
2021), MAWPS (Koncel-Kedziorski et al., 2016), TabMWP (TAB) (Lu et al., 2022), MathQA
(MQA) (Amini et al., 2019), MMLU-STEM (Hendrycks et al., 2020), and SAT (Azerbayev et al.,
2023). For evaluation, we adopt the math evaluation suite[1].
**Evaluation settings. For all tasks, we evaluate under the few-shot chain-of-thought (CoT) (Wei**
et al., 2022) setting. We use accuracy as the final evaluation metric.
**Results. As shown in Table 2, our continual pre-trained model achieves an absolute average accu-**
racy improvement of 7.5% across all benchmarks compared with Mistral-7B, with a significant gain
of 15.6% on the most challenging MATH benchmark. We remark that TRAIT outperforms all the
baselines in all the tasks.
5.3 ANALYSIS
**Effectiveness of TRAIT. In Figure 3, we see that the general data is far from the original in-**
domain data, indicating the necessity for domain-adaptive continual pretraining. The downstream
tasks are distributed in various clusters, in proximity to the in-domain data, but not fully covered by
[1math-evaluation-harness](https://github.com/ZubinGou/math-evaluation-harness)
-----
|In-D. Sel. Syn.|QAC QLP TG TR|
|---|---|
|Mistral-7B|82.93 78.81 – –|
|---|---|
|" " " " " " " " "|82.50 79.18 50.73 50.38 82.47 79.35 52.05 48.05 82.64 79.17 52.88 48.20 83.33 80.21 50.85 50.00 84.40 80.71 52.98 54.43|
|---|---|
Table 4: Performance of models continual pre-trained on different data. Models are evaluated on
the ads domain under the fine-tuning setting. Here, In-D. means the original in-domain corpus, Sel.
means selected in-domain data, and Syn. means synthetic passages.
Figure 3: Visualization of samples from the general corpus, the original in-domain ads corpus, ads
General
In-Domain
TRAIT
Downstream
downstream tasks, and TRAIT (including both selected in-domain data and synthetic passages).
We use Spacy (Honnibal & Montani, 2017) (left) and Mistral-7B (Jiang et al., 2023) (right) for
embedding, while using t-SNE (Van der Maaten & Hinton, 2008) for visualization.
it. For TRAIT, the mix of selected in-domain data and synthetic passages perfectly aligns with the
downstream tasks, reflecting the task-awareness nature of our approach.
In Table 4, we see the effect of each data component. The second row confirms the benefit of
the original in-domain data, showing an average 1% performance gain across all tasks. A more
notable contribution comes from TRAIT augmented data, with a nearly 5% gain observed, showing
effectiveness of our data augmentation strategy.
Moreover, the benefit of the enlightenment paragraph is significant, as shown in Table 3. It reinforces a deeper understanding of queries in the ads domain and focuses on shared problem-solving
techniques in the math domain.
**Two-stage vs. Single-stage training. The performance of downstream tasks during continual pre-**
training is documented in Figure 4. In the first stage, where the aim is to learn new in-domain
knowledge, we observe fluctuations in downstream performance as new knowledge, which may not
be directly relevant to the tasks, is acquired. In the second stage, the focus shifts to applying the
learned knowledge to solve downstream tasks directly, resulting in a larger upward trend in task
improvement. The overall benefit of adopting the two-stage training compared to a mixed single
stage is significant, as shown in Table 3.
**Generality after continual pre-training. Using the original in-domain data, the generality of LLMs**
deteriorates significantly, as shown in Table 5, with the model’s performance on BBH decreasing
from 55.91 to 49.84. In contrast, the model using TRAIT retains much of the generality. This is
because, during the first stage, we train on the selected in-domain data from the general corpus as a
-----
|Col1|BBH ARC HellaSwag AgiEval|
|---|---|
|Mistral-7B|55.91 70.55 61.25 32.64|
|---|---|
|In-Domain TRAIT|49.84 65.24 59.57 30.43 53.06 67.59 60.34 32.26|
|---|---|
Table 5: Few-shot evaluation of models trained on different data on general benchmarks. Here, In_Domain means the model is continual pre-trained on ads in-domain corpus (without selected data)._
64
62
60
58
56
54
52
50
48
46
56
54
52
50
48
46
44
|Col1|Col2|Col3|rs|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||Ou|rs|||||||
|||Dsi|r|||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
3000 6000 9000 12000 15000 18000
Ours
Dsir
Steps
|Col1|Col2|Col3|urs|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||O|urs|||||||
|||D|sir|||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
|||||||||||
1000 2000 3000 4000 5000 6000
Steps
Figure 4: Left: The average winning rate of 4 ads generation tasks (AG, DG, TG and TR) during
continual pre-training. Right: The average few-shot accuracy of all math tasks during continual
pre-training.
knowledge replay (Ibrahim et al., 2024) and refocus on the target domain. Additionally, high-quality
synthetic data ensures the model is trained with extensive reasoning.
6 RELATED WORK
6.1 DATA AUGMENTATION FOR LANGUAGE MODELS
Data selection is essential for the effective training of LLMs, as it significantly influences their performance. Common data selection methods include heuristic-based quality filters (Computer, 2023;
Soldaini et al., 2024), lightweight classifiers (Joulin et al., 2017; Brown et al., 2020), and perplexity
(PPL)-based models (Heafield, 2011; Wenzek et al., 2019), which are often developed using curated
sources such as Wikipedia. For domain-specific data, techniques usually involve extracting information from the open web using heuristics or applying specialized classifiers to ensure relevance (Ma
et al., 2023; Xie et al., 2023b). Other approaches select data based on their added value compared to
typical examples from the target domain, or employ n-gram hash models to identify samples related
to that domain (Axelrod, 2017; Feng et al., 2022; Xie et al., 2023a).
The use of synthetic data is becoming a key strategy for augmenting the training of LLMs, particularly useful in areas like mathematical reasoning (Gou et al., 2023; Huang et al., 2024; Toshniwal
et al., 2024; Li et al., 2024a) and general instruction following (Wang et al., 2022; Xu et al., 2023;
Li et al., 2024b). The Phi series highlights the effectiveness of models trained solely on “textbook
quality” synthetic data (Gunasekar et al., 2023; Li et al., 2023; Abdin et al., 2024).
6.2 CONTINUAL PRE-TRAINING OF LLMS
Continual pre-training is increasingly recognized as an effective way to adapt large language models
(LLMs) incrementally to new data or changes in domain focus without complete retraining. This
method ensures the continuous integration of new knowledge, maintaining the model’s relevance
and effectiveness (Jin et al., 2021; Loureiro et al., 2022). Gururangan et al. (2020) has shown that
continual pre-training can significantly improve model performance across various domains. LLMs
like EcomGPT and FinPythia demonstrate the application of continual pre-training in e-commerce
-----
and finance, using data from the open web and Common Crawl to stay functional and relevant (Ma
et al., 2023; Xie et al., 2023b).
7 CONCLUSION
This paper presents TRAIT, a task-oriented in-domain data augmentation framework for continual
pre-training of large language models. The framework is divided into two parts. First, we select indomain data from general domain-agnostic corpora to augment the training set. The augmented indomain training corpus contain rich domain knowledge. Second, we generate task-oriented synthetic
passages. These passages contain guidance on how to apply domain knowledge to answer questions
about downstream tasks. We conduct extensive experiments by adapting LLMs to the advertisement and math domains. Experimental results validate the effectiveness of the proposed framework.
Specifically, on average, TRAIT improves the base LLM (without continual pre-training) by over
5% on both domains.
REFERENCES
Marah Abdin, Sam Ade Jacobs, Ammar Ahmad Awan, Jyoti Aneja, Ahmed Awadallah, Hany
Awadalla, Nguyen Bach, Amit Bahree, Arash Bakhtiari, Harkirat Behl, et al. Phi-3 technical report: A highly capable language model locally on your phone. arXiv preprint arXiv:2404.14219,
2024.
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4 technical
report. arXiv preprint arXiv:2303.08774, 2023.
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. Mathqa: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319, 2019.
Amittai Axelrod. Cynical selection of language model training data. _arXiv preprint_
_arXiv:1709.02279, 2017._
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model
for mathematics. arXiv preprint arXiv:2310.10631, 2023.
Sadra Bakhshandeh. Benchmarking medical large language models. Nature Reviews Bioengineer_ing, 1(8):543–543, 2023._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Guiming Hardy Chen, Shunian Chen, Ziche Liu, Feng Jiang, and Benyou Wang. Humans or llms as
the judge? a study on judgement biases. arXiv preprint arXiv:2402.10669, 2024.
Daixuan Cheng, Shaohan Huang, and Furu Wei. Adapting large language models via reading comprehension. arXiv preprint arXiv:2309.09530, 2023.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):
1–113, 2023.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Together Computer. Redpajama: an open dataset for training large language models, 2023. URL
[https://github.com/togethercomputer/RedPajama-Data.](https://github.com/togethercomputer/RedPajama-Data)
-----
Yukun Feng, Patrick Xia, Benjamin Van Durme, and Jo˜ao Sedoc. Automatic document selection for
efficient encoder pretraining. arXiv preprint arXiv:2210.10951, 2022.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang, Minlie Huang, Nan Duan, Weizhu Chen,
et al. Tora: A tool-integrated reasoning agent for mathematical problem solving. arXiv preprint
_arXiv:2309.17452, 2023._
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C´esar Teodoro Mendes, Allie Del Giorno, Sivakanth
Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi, et al. Textbooks are
all you need. arXiv preprint arXiv:2306.11644, 2023.
Suchin Gururangan, Ana Marasovi´c, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey,
and Noah A Smith. Don’t stop pretraining: Adapt language models to domains and tasks. arXiv
_preprint arXiv:2004.10964, 2020._
Kenneth Heafield. Kenlm: Faster and smaller language model queries. In Proceedings of the sixth
_workshop on statistical machine translation, pp. 187–197, 2011._
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and
Jacob Steinhardt. Measuring massive multitask language understanding. _arXiv preprint_
_arXiv:2009.03300, 2020._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021._
Matthew Honnibal and Ines Montani. spacy 2: Natural language understanding with bloom embeddings, convolutional neural networks and incremental parsing. To appear, 7(1):411–420, 2017.
Shengding Hu, Yuge Tu, Xu Han, Chaoqun He, Ganqu Cui, Xiang Long, Zhi Zheng, Yewei Fang,
Yuxiang Huang, Weilin Zhao, et al. Minicpm: Unveiling the potential of small language models
with scalable training strategies. arXiv preprint arXiv:2404.06395, 2024.
Yiming Huang, Xiao Liu, Yeyun Gong, Zhibin Gou, Yelong Shen, Nan Duan, and Weizhu Chen.
Key-point-driven data synthesis with its enhancement on mathematical reasoning. arXiv preprint
_arXiv:2403.02333, 2024._
Adam Ibrahim, Benjamin Th´erien, Kshitij Gupta, Mats L Richter, Quentin Anthony, Timoth´ee
Lesort, Eugene Belilovsky, and Irina Rish. Simple and scalable strategies to continually pre-train
large language models. arXiv preprint arXiv:2403.08763, 2024.
Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep
Dasigi, Joel Jang, David Wadden, Noah A Smith, Iz Beltagy, et al. Camels in a changing climate: Enhancing lm adaptation with tulu 2. arXiv preprint arXiv:2311.10702, 2023.
Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot,
Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al.
Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
Xisen Jin, Dejiao Zhang, Henghui Zhu, Wei Xiao, Shang-Wen Li, Xiaokai Wei, Andrew Arnold, and
Xiang Ren. Lifelong pretraining: Continually adapting language models to emerging corpora.
_arXiv preprint arXiv:2110.08534, 2021._
Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. Bag of tricks for efficient
text classification. In Proceedings of the 15th Conference of the European Chapter of the As_sociation for Computational Linguistics: Volume 2, Short Papers, pp. 427–431. Association for_
Computational Linguistics, April 2017.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
Mawps: A math word problem repository. In Proceedings of the 2016 conference of the north
_american chapter of the association for computational linguistics: human language technologies,_
pp. 1152–1157, 2016.
-----
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu, Joseph
Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large language model
serving with pagedattention. In Proceedings of the 29th Symposium on Operating Systems Prin_ciples, pp. 611–626, 2023._
Chen Li, Weiqi Wang, Jingcheng Hu, Yixuan Wei, Nanning Zheng, Han Hu, Zheng Zhang, and
Houwen Peng. Common 7b language models already possess strong math capabilities. arXiv
_preprint arXiv:2403.04706, 2024a._
Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang,
Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, et al. Synthetic data
(almost) from scratch: Generalized instruction tuning for language models. _arXiv preprint_
_arXiv:2402.13064, 2024b._
Yuanzhi Li, S´ebastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee.
Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463, 2023.
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan
Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step. arXiv preprint
_arXiv:2305.20050, 2023._
Ruibo Liu, Jerry Wei, Fangyu Liu, Chenglei Si, Yanzhe Zhang, Jinmeng Rao, Steven Zheng, Daiyi
Peng, Diyi Yang, Denny Zhou, et al. Best practices and lessons learned on synthetic data for
language models. arXiv preprint arXiv:2404.07503, 2024.
Daniel Loureiro, Francesco Barbieri, Leonardo Neves, Luis Espinosa Anke, and Jose CamachoCollados. Timelms: Diachronic language models from twitter. arXiv preprint arXiv:2202.03829,
2022.
Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter
Clark, and Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured
mathematical reasoning. arXiv preprint arXiv:2209.14610, 2022.
Shirong Ma, Shen Huang, Shulin Huang, Xiaobin Wang, Yangning Li, Hai-Tao Zheng, Pengjun Xie,
Fei Huang, and Yong Jiang. Ecomgpt-ct: Continual pre-training of e-commerce large language
models with semi-structured data. arXiv preprint arXiv:2312.15696, 2023.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing
english math word problem solvers. arXiv preprint arXiv:2106.15772, 2021.
[OpenAI. Chatgpt: Optimizing language models for dialogue. OpenAI Blog, 2022. URL https:](https://openai.com/blog/chatgpt/)
[//openai.com/blog/chatgpt/.](https://openai.com/blog/chatgpt/)
Keiran Paster, Marco Dos Santos, Zhangir Azerbayev, and Jimmy Ba. Openwebmath: An open
dataset of high-quality mathematical web text, 2023.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math
word problems? arXiv preprint arXiv:2103.07191, 2021.
Guilherme Penedo, Quentin Malartic, Daniel Hesslow, Ruxandra Cojocaru, Alessandro Cappelli,
Hamza Alobeidli, Baptiste Pannier, Ebtesam Almazrouei, and Julien Launay. The refinedweb
dataset for falcon llm: Outperforming curated corpora with web data, and web data only, 2023.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual
models from natural language supervision. In International conference on machine learning, pp.
8748–8763. PMLR, 2021.
Jeff Rasley, Samyam Rajbhandari, Olatunji Ruwase, and Yuxiong He. Deepspeed: System optimizations enable training deep learning models with over 100 billion parameters. In Proceedings
_of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining,_
pp. 3505–3506, 2020.
-----
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li, Y Wu,
and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open language
models. arXiv preprint arXiv:2402.03300, 2024.
Daria Soboleva, Faisal Al-Khateeb, Robert Myers, Jacob R Steeves, Joel Hestness, and Nolan Dey.
[SlimPajama: A 627B token cleaned and deduplicated version of RedPajama, 2023. URL https:](https://huggingface.co/datasets/cerebras/SlimPajama-627B)
[//huggingface.co/datasets/cerebras/SlimPajama-627B.](https://huggingface.co/datasets/cerebras/SlimPajama-627B)
Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur,
Ben Bogin, Khyathi Chandu, Jennifer Dumas, Yanai Elazar, et al. Dolma: An open corpus of
three trillion tokens for language model pretraining research. arXiv preprint arXiv:2402.00159,
2024.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model.
[https://github.com/tatsu-lab/stanford_alpaca, 2023.](https://github.com/tatsu-lab/stanford_alpaca)
Teknium. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants, 2023.
[URL https://huggingface.co/datasets/teknium/OpenHermes-2.5.](https://huggingface.co/datasets/teknium/OpenHermes-2.5)
Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman.
Openmathinstruct-1: A 1.8 million math instruction tuning dataset. arXiv preprint arXiv: Arxiv_2402.10176, 2024._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee
Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Laurens Van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of machine
_learning research, 9(11), 2008._
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions.
_arXiv preprint arXiv:2212.10560, 2022._
Zengzhi Wang, Rui Xia, and Pengfei Liu. Generative ai for math: Part i – mathpile: A billion-tokenscale pretraining corpus for math. arXiv preprint arXiv:2312.17120, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_neural information processing systems, 35:24824–24837, 2022._
Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzm´an,
Armand Joulin, and Edouard Grave. Ccnet: Extracting high quality monolingual datasets from
web crawl data. arXiv preprint arXiv:1911.00359, 2019.
Sang Michael Xie, Shibani Santurkar, Tengyu Ma, and Percy S Liang. Data selection for language
models via importance resampling. Advances in Neural Information Processing Systems, 36:
34201–34227, 2023a.
Yong Xie, Karan Aggarwal, and Aitzaz Ahmad. Efficient continual pre-training for building domain
specific large language models. arXiv preprint arXiv:2311.08545, 2023b.
Can Xu, Qingfeng Sun, Kai Zheng, Xiubo Geng, Pu Zhao, Jiazhan Feng, Chongyang Tao, and
Daxin Jiang. Wizardlm: Empowering large language models to follow complex instructions.
_arXiv preprint arXiv:2304.12244, 2023._
-----
A TRAINING DETAILS
In this study, we select Mistral-7B (Jiang et al., 2023) as our base model for domain-adaptive continual pre-training. We perform continual pre-training using DeepSpeed (Rasley et al., 2020) with its
ZeRO stage-1 optimizer, setting the batch size to 1M tokens and utilizing bf16 precision. We adopt
a Warmup-Stable-Decay (WSD) learning rate scheduler (Hu et al., 2024) with a maximum learning
rate of 2e-5, involving a 3% warm-up phase and an exponential decay phase during the last 10% of
the training process.
During fine-tuning, we train all the models on downstream tasks for 5 epochs, using a cosine learning
rate schedule with a maximum rate of 5e-6 and a 3% warm-up phase. For both few-shot and finetuned settings, we use vLLM (0.4.2) (Kwon et al., 2023) to accelerate inference.
B POTENTIAL RISKS OF TRAINING SOLELY ON IN-DOMAIN CORPUS
Training a model solely on an in-domain corpus that lacks strict filtering for quality and diversity
can lead to reduced effectiveness. While the original in-domain corpus helps to maintain relevance,
its quality may not match that of carefully curated pre-training corpora used in leading open-source
language models, such as Mistral-7B. (Jiang et al., 2023).
As demonstrated in Figure 5, using only our specific in-domain corpus for training adversely affects
the model’s ability to perform well in downstream generation tasks. Instead of providing accurate
responses, it produces repetitive and meaningless text. In contrast, the base model, not subjected to
this limited training, correctly generates appropriate responses.
However, by incorporating high-quality data from both retrieved and synthetic sources during training, we can preserve the model’s original capabilities. This approach aligns with findings (Ibrahim
et al., 2024) that suggest replay techniques during continual training help maintain a model’s pretrained skills.
### Prompt:
You are an expert in advertisement and your task is to generate a creative ad given its content.
Here is the content of the advertisement:
DocumentTitle: Flow Meters & Controllers — Ferntree Gully, VIC — Anri § Paragraph: Florite
manufacture a range of high quality microcomputer- based controllers suitable for any commercial,
industrial or research application such as: ; Flow Control – Power supply/readout/control units for
Mass Flow Meters (MFM’s)MFC’ programmable measurement units. Panel mount or bench top. Also
suitable for Pressure, temperature, level & any process control. call our team on 03 9752 4057. §
OtherPrimaryContent: en/ mfd ;mcmflow. § VisualTitle: Reliable Flow/ Controllers for Any Industry
§ MetaDescription: Controls supply flow meters & controllers for businesses Australia-wide. Call 03
9752 4057 or click here to learn more. §
The generated creative ad is:
### Base Model:
Flow Control: The Ultimate in Flow Metering. Experience the Elegance.
### Model Continual Trained on In-domain Unlabeled Data:
5’ x 2’ 3’ x 8’ 4’ 6’ x 10’ x 12’ 7’ x 16’ 9’ x 13’ x 14’ x 15’ x 18’ x 20’ x 23’ x 24’ x 27’ x 29’ x 30’
x 40’ x 50’ 11’ x 26’ x 28’ x 32’ x 36’ x 44’ 17’ x 60’ 22’ x 45’ x 70’ x 75’ x 80’ x 90’ x 105’ x 110’
x 120’ x 130’ x 100’ x 140’ x 160’ x 180’ x 200’ x 220’ x 240’ x 270’ x 300’ x 190’ x 250’ x 280’ x
260’ x 310’
### Model Trained on Our Curated Corpus:
100% Same-Day Delivery. Gophr: The Future of Last Mile Delivery.
Figure 5: An example of model degradation: the model, when continually pretrained on the original
in-domain corpus, exhibits repetitive and nonsensical text generation, unlike its performance with
the base model and TRAIT corpus, where this issue is absent.
-----
C EVALUATION DETAILS OF QUERY REWRITING
We generate ten query rewrites for each query and apply a quality filter to score the rewrites as
either Good or Bad, based on their alignment with the original user query’s intent. Density (Den) is
defined as the total average number of good query rewrites per query. For diversity (Div), we use an
off-the-shelf clustering algorithm to determine the average number of clusters per query based on
all query rewrites. Both metrics range from 0 to 10.
D EXAMPLES OF PROMPTS AND TRAINING DATA
In this section, we demonstrate examples of prompts we used along with all unlabeled and downstream fine-tuning data. Specifically, we present an example of the prompt for synthetic data in
Figure 6, an in-domain ad sample in Figure 7, a synthetic ad sample in Figure 8, a synthetic math
sample in Figure 9, retrieved ad and math samples in Figure 11 and 10, and the downstream finetuning data for the ads domain in Figure 12, 13, 14, 15, 16, 17 and 18.
**Prompt for Generating Task-Oriented Synthetic Data**
#### Structured Guideline for Passage Generation
#### Inputs Required:
- **Questions**: The question for each task.
#### Passage Generation Steps:
- Task specific: For each of the downstream tasks listed below, write one paragraph analyzing the
potential answers and the reasoning process associated with each. Please list the answer explicitly.
- Enlightenment: After writing paragraphs for all tasks, highlighting shared learnings across all tasks
and distinct problem solving tricks for each task, specifically the current problem.
#### Quality Considerations:
- Ensure coherence and logical flow throughout the passage.
- Maintain a concise and clear writing style, avoiding redundancy and focusing on summarizing key
points.
#### Input: Please return only the generated passage between tags ¡Passage¿¡/Passage¿ given below
input.
- {Task 1}: {Problem 1}
- {Task 2}: {Problem 2}
- {Task 3}: {Problem 3}
- {Task 4}: {Problem 4}
Figure 6: The prompt utilized for generating the task-oriented synthetic data.
-----
**Example for an unlabeled ad sample**
**Advertisement Title: Vision Reading Glasse**
**Advertisement Description: Save On Vision Reading Glasses. Everyday Low Prices!**
**Key words: hand held lighted magnifying glass**
**Document Title: Hand Held Lighted Magnifying Glass - Walmart.com**
**Heading: Results for ”Hand Held Lighted Magnifying Glass” (1000+) Options**
**Primary Content: Departments Brand Speed Subscription Availability Special Offers Customer Rat-**
ing Features Magnification Color Material Manufacturer Part Number Count Retailer Gifting Price
when purchased online Best seller Sponsored Magnifying Glass with 18 LED Light, Meromore 30x
Handheld Magnifier for Reading Save with Shipping, arrives in 2 days Meromore Magnifying Glass,
Lighted Magnifying Glass with 3 LED, Handheld Magnifier with 3x 45x Magnification for Kids Reading Meromore 30x Magnifying Glass with 18 LED Lights, Black MagniPros 3 Ultra Bright LED Lights
3X 4.5X 25X Power Handheld Reading Magnifying Glass with Light- Ideal for Reading Small Prints,
Map, Coins, Inspection and Jewelry Loupe Free shipping, arrives in 3+ days Magnifying Glass with
Light, Lighted Magnifying Glass, 5X Handheld Pocket Magnifier Small Illuminated Folding Hand
Held Lighted Magnifier for Reading Coins Hobby Travel - 45 Mm Diameter 2 options KEKOY Handheld Magnifying Glass with Light and Double Lens, Ultra Clear USB Charging Magnifying Glass for
Close Work with 6x 9x 15x Detachable Lens, Strong Maginifier for Reading RockDaMic Professional
Magnifying Glass with Light (3X / 45x) Large Lighted Handheld Glass Magnifier Lupa for Reading,
Jewelry, Coins, Stamps, Fine Print - Strongest Magnify for Kids & Seniors 2 sizes Large 4.35in Magnifying Glass 10X 35X with Light and Stand, Handheld Standing LED Illuminated Magnifier, Folding
Reading Magnifying Glass with for Seniors Read, Cross Stitch, Map, Jewelry Meromore Magnifying
Glass.
Figure 7: An example of an unlabeled in-domain sample from the ads domain.
-----
**Example for Syhthetic Ads Data**
For the Query Rewrites (QR) task, the potential answers revolve around creating variations of the user
query ”machu picchu tour packages luxury.” Possible rewrites could include ”luxury Machu Picchu
travel packages,” ”high-end Machu Picchu tours,” and ”premium tour packages to Machu Picchu.”
Each rewrite aims to capture the essence of the user’s request while incorporating keywords that might
improve the ad’s visibility and relevance. The reasoning process involves ensuring that the rewrites
maintain the focus on luxury and Machu Picchu, aligning with the user’s intent and the ad’s offer.
In the Ad Copy Generation (AG) task, the goal is to create a complete ad copy directly from the
landing page content. This involves synthesizing the key elements of the landing page into a compelling ad. For example: ”Discover High-End Machu Picchu Tours with Inca Expert. Since 1998,
we’ve crafted bespoke travel experiences with the guidance of our award-winning specialists, including Emmy-winning filmmaker Kim MacQuarrie and World’s Top Chef Virgilio Martinez. Experience
Peru with private guides and boutique services. Visit us at https://www.incaexpert.com.” The reasoning here includes selecting the most impressive and relevant details from the landing page to attract
potential customers while maintaining a concise and engaging format.
For the Title Rewriting (TR) task, the focus is on refining the ad title to better suit the user’s needs.
Given the user’s query, a more targeted title could be ”Luxury Machu Picchu Tours by Peru Experts”
or ”Top-Rated High-End Machu Picchu Travel.” The aim is to enhance engagement by clearly communicating the luxury aspect and the expertise of the travel firm, making it more appealing and relevant
to the user’s search.
In the Query-Ad Copy Relevance (QAC) task, the relevance of the user’s query to the ad copy must
be examined. The user query ”machu picchu tour packages luxury” directly aligns with the ad copy,
which promotes ”High-End Machu Picchu Tours” and highlights boutique service and private guided
tours. The ad copy effectively addresses the user’s desire for a luxury travel experience, ensuring high
relevance. The reasoning involves matching keywords and themes from the query with those in the ad
copy to ensure they resonate well.
The Query-Landing Page Relevance (QLP) task assesses the relevance between the user’s query and
the advertisement’s landing page content. The landing page details Inca Expert’s specialization in boutique, high-end travel to Peru, featuring notable specialists and personalized services, which strongly
align with the user’s search for luxury Machu Picchu tour packages. The primary content supports the
luxury and personalized experience sought by the user. The reasoning here involves confirming that
the landing page provides substantial and relevant information that fulfills the user’s query.
In conclusion, the key points across the tasks highlight the importance of aligning ad copy and titles
with the user’s intent for luxury travel, ensuring high relevance and engagement. Each task focuses
on different aspects: Query Rewrites (QR) focusing on synonyms to luxury; Ad Copy Generation
(AG) synthesizes key landing page information like personalized services provided by award-winning
specialists in flim-making and cooking, into compelling ads; and Title Rewriting (TR) enhances engagement by clearly communicating the luxury aspect and the expertise of the travel firm. Query-Ad
Copy Relevance (QAC) and Query-Landing Page Relevance (QLP) ensure consistency and alignment
between user expectations, ad content, and landing page details for the high-quality travel experiences
provided. These tasks collectively improve the user’s experience by ensuring coherence and relevance
across all touchpoints, ultimately driving engagement and satisfaction.
Figure 8: An example of generated synthetic data in ads domain.
-----
**Example for Syhthetic Math Data**
###Task 1. For the below question: In the undersea world of Mathlantica, Aquatic Mathematicians
communicate through positive integer pairs (a, b) where b¡a¡150. However, they can only understand
the messages if both a/b and (a+2)/(b+2) are integers. How many such communicable pairs exist? As
an additional challenge, they ask you, a surface-dweller, to add up the numbers 212, 321 and 123,
which they inform you are in base 10, and provide the sum in base 10. Can you solve these underwater
conundrums?,Enclose the final answer using \boxed{}. The plausible answer and explantion could
be: For the first part, since b|a, b + 2|a + 2, then gcd (b, a) = b and gcd (b + 2, a + 2) = b + 2. By the
Euclidean algorithm, these can be rewritten respectively as gcd (b, a _−_ _b) = b and gcd (b_ +2, a _−_ _b) =_
_b_ +2, which implies that both b, b +2|a _−_ _b. Also, as gcd (b, b_ +2) = 1, it follows that b(b +2)|a _−_ _b._
[1],Thus, for a given value offollows that there are _b150(b+2)−b_ _b, we need the number of multiples ofsatisfactory positive integers for all integers b(b + 2) b_ from150. The answer is, 0 to 150 − _b. It_
_≤_
148 j k
150 _b_
_−_ = 74 + 24 + 12 + 6 + 4 + 3 + 2 + 1 + 1 = 127.
_b(b + 2)_
_b=1_
X
For the second part, adding 21210, 32110 and 12310 together in base 10 gives us 65610.,,So, the final
answer is 127 + 656 = 783 .,So the answer is 783 .
###Task 2. For the below question: In a space station, an astronaut is studying a parallelogram-shaped
window named WXY Z. The angle WXY is four times the angle XY Z. Inside the window, there
is a right circular truncated cone-shaped object. The smaller base of the object is on XY and the
larger base is on WZ. The height of the object is equal to the length WY . The radius of the larger
base is 10 cm and the radius of the smaller base is 5 cm. Can you help the astronaut calculate the
volume of the object in cm[3]?,Enclose the final answer using \boxed{}. The plausible answer and
explantion could be: Firstly, we need to find the height of the truncated cone. Since the height is
equal to the length WY , we need to find WY . To do this, we use the fact that the measure of angle
_WXY is 4 times the measure of angle XY Z. Because WZ ∥_ _XY , we have ∠W + ∠X = 180[◦]._
Since ∠W = 4∠X, we have 4∠X + ∠X = 180[◦], so 5∠X = 180[◦] and ∠X = 36[◦]. Therefore,
∠W = 180[◦] _−_ ∠X = 144[◦]. Since WXY Z is a parallelogram, ∠WY Z = ∠X = 36[◦]. Therefore,
triangle WY Z is a right-angled triangle and WY = 2WZ = 2 ∗ 10 = 20 cm.,,We ”complete” the
truncated cone by adding a smaller, similar cone atop the cut, forming a large cone. Since the small
and large cone are similar, we have x/5 = (x + 20)/10; solving yields x = 20. Hence the small
cone has radius 5, height 20, and volume (1/3)π(5[2])(20) = 500/3π and the large cone has radius 10,
height 40, and volume (1/3)π(10[2])(40) = 4000/3π. The frustum’s volume is the difference of these
two volumes, or 4000/3π − 500/3π = 3500/3π cubic cm.,So the answer is 3500/3π .
###Task 3. For the below question: If f (x) = 3x + 1, what is f (3)? The plausible answer and
explantion could be: We have f (3) = 3 ∗ 3 + 1 = 10 .
###Task 4. For the below question: In a music festival, there are certain speakers that play a specific
song every 20 minutes. If a speaker played the song 5 minutes ago and will play it again in 30 minutes,
how many such speakers are there that have been playing for less than 50 minutes? In the same
festival, two bands are performing. The first band’s performance progresses at a rate of 2 songs per
minute, and the second band’s performance progresses at a rate of 4 songs per minute. Both bands
started performing from the same point (5,10) minutes ago. How far apart are the points where these
bands finish their performances?,Enclose the final answer using \boxed{}. The plausible answer and
explantion could be: There are two parts in this question.
-----
Part 1: Adding −5 to both sides of x + 5 ≡ 30 (mod 20) gives x ≡ 30 − 5 (mod 20). We find
30 − 5 = 25 ≡ 5 (mod 20), so x ≡ 5 (mod 20). Thus the smallest positive integer that satisfies the
given congruence is x = 5, and all the other solutions can be obtained by adding a multiple of 20 to 5.
The next two such integers are 25 and 45. Since 45 is less than 50, there are 3 integers less than 50
that satisfy the congruence x + 5 ≡ 30 (mod 20).
Part 2: The two lines have equations
_y −_ 10 = 2(x − 5) and _y −_ 10 = 4(x − 5).
The x-intercepts, obtained by setting y = 0 in the respective equations, are 5 and 2.5. The distance
between the points (5, 0) and (2.5, 0) is 2.5 .,So, for the first sub-question, the answer is 3 .,For the
sub-question 2, the answer is 2.5 .
In analyzing the provided math tasks, we can identify several shared problem-solving techniques such
as mathematical induction, unit transformation, and basic algorithmics. For instance, in Math task
1, the problem-solving involves a combination of number theory (specifically divisibility and the Euclidean algorithm) and arithmetic operations in different bases. Math task 2 requires geometric reasoning to find the dimensions of the shapes involved and then applying the formula for the volume of
a truncated cone. Math task 3 is a straightforward application of function evaluation, while Math task
4 combines modular arithmetic with linear equations to solve the problem.
Each task also exhibits unique problem-solving techniques. Math task 1 uses number theory to find
communicable pairs and base conversion for arithmetic operations. Math task 2 involves geometric
properties of parallelograms and right-angled triangles, as well as similarity of shapes to calculate
volume. Math task 3, being the simplest, only requires direct substitution in a linear function. Math
task 4 uses modular arithmetic to determine the number of speakers and the concept of linear equations
to find the distance between two points.
Figure 9: An example of generated synthetic data in math domain.
**Example for Retrieved Ads**
Acadeos is the best online learning sites In USA to achieve your professional goals through various
USA e courses. Join our Online Academy US now. Learn US Abacus online through our online
learning platforms in Acadeos. We use virtual Abacus USA for students to get better Abacus learning
online USA.
Learn Alphabet Phonics USA through our online Online Education In United States in Acadeos. You
can study phonics in your home by learning online. Improve your child’s mathematical skill by employing your kid in Vedic maths online classes In USA. Join US Vedic Math Online Course In Acadeos
now.
Get Dissertation Help Online In US from verified experts and buy a thesis paper In USA online with
high quality in our Acadeos in an effective way. Improve your child’s learning skills through story
telling USA from our virtual learning academy in United States by joining your kids in Acadeos now.
Reach online science tutor In US and get best tutoring services United States. Connect with our
experienced professionals in Acadeos to study online.
Access Skype math tutor US to get a best Online Mathematics Tutor In US for your kid to gain best
mathematical knowledge from the best Math Tutoring USA.
Enroll your kid in our virtual schools USA to get virtual learning In US from our professionals In
Acadeos to get online homework help for assignments.
Acadeos provides the best maths tutors online to help you with math homeworks. We offer a customised tuition plans and a student-centric approach.
Figure 10: Examples of retrieved ads unlabeled samples.
-----
**Example for Retrieved Math**
Exponential Growth Worksheet
- Page 1
1. If a quantity increases by the same percent r in each unit of time t, then the quantity is .
a. growing exponentially b. decreasing exponentially c. constant
### Solution:
If a quantity is increasing by the same percent r in each unit of time t, then the quantity is growing
exponentially.
2. Which of the following equations represents exponential growth?
a. y = r(1 + r) b. y = r(1 + C) c. y = Cr d. y = C(1 + r)t
### Solution:
Exponential growth can be modeled by the equation y = C (1 + r)t, where C is the initial amount, r is
the growth rate and t is the time.
3. The expression (1 + r) is called in the equation y = C(1 + r)t.
a. decay factor b. growth factor c. decay and growth factors d. exponent
### Solution:
The expression (1 + r), in the equation y = C(1 + r)t is called growth factor.
4. The average length of a person’s hair at birth is 0.36 inches. The length of the hair increases by
about 10% each day during the first six weeks. Choose a model that represents the average length of
the hair during the first six weeks.
a. y = 0.36(1.1)t b. y = -0.36(1.1)t c. y = 1.1(0.36)t d. None of the above
### Solution:
Let y be the length of the hair during the first six weeks and t be the number of days.
y = C(1 + r)t [Write exponential growth model.]
= 0.36(1 + 0.10)t [Substitute C = 0.36 and r = 0.10.]
= 0.36(1.1)t
The model for the length of the hair in first six weeks is y = 0.36(1.1)t.
5. A bank pays 4% interest compounded yearly on a deposit of $900. What will be the balance in the
account after 7 years?
a. $1288 b. $1088 c. $2376 d. $1188
### Solution: The exponential growth model is given by the equation, y = P(1 + r)t, where P is the
initial amount, r is the growth rate and t is the number of years. = 900(1 + 0.04)7 Balance after 7 years
[Substitute P = 900, t = 7 and r = 0.04.] = 900(1.04)7 = 900 x 1.32 = 1188 [Simplify.] The account
balance after 7 years will be about$1188.
6. There are 20 bears in a zoo. What will be their population after 3 years, if the population doubles
each year?
a. 160 bears b. 260 bears c. 60 bears d. 210 bears
### Solution:
The exponential growth model is given by the equation, y = C(1 + r)t, where C is the initial number,
(1 + r) is the growth factor and t is the number of years.
Population after 3 years = 20(2)3 [Substitute C = 20, 1 + r = 2 and t = 3.]
= 160 [Simplify.]
There will be 160 bears after 3 years.
Figure 11: Examples of retrieved Math unlabeled samples.
-----
**1. Example for Query-Ad Copy Relevance (QAC)**
>>> Prompt:
You are an expert in advertisement and your task is to evaluate the relevance between a user input
query and an advertisement.
Here are some attributes for the advertisment:
### begin advertisement
Actual Advertisement Title: Plumbers Near Me - Enter Your Zip Code To Start - View Quotes In
Under 24 Hours
Actual Advertisement Description: Explore Professional Plumbers Who Specialize In Your Project
Type. Get Up To 4 Estimates. Receive Accurate Quotes For Your Plumbing Project, So You Can
Easily Save Time And Money.
### end advertisement
The user query is: kaufmann plumbing palm springs
Please evaluate that whether the advertisment is relavant to the user query. You can only answer with
True or False.
The answer is (True or False):
>>> Response:
False
Figure 12: Examples of the prompt and the labeled responses for the QAC task.
**2. Example for Query-Landing Page Relevance (QLP)**
>>> Prompt:
You are an expert in advertisement and your task is to evaluate the relevance between a user input
query and an advertisement.
Here are some attributes for the advertisement:
### begin advertisement
Document Title: Verified Camping World Promo Code & Coupon Code August 2022
Visual Title: Camping World Promo Code & Coupon Code July 2022
Heading: Submit Coupon for Camping World Camping World Stats Camping World Top Coupon
Codes and Offers Get Latest, Vitrified 30% Off Promo Code, Don’t Pay Full Price! ADVERTISEMENT Save Your Money, Get 30% Off Coupon Code Big Deal Today, Up To 60% Offer Flash Sale!
Up To 70% Off Coupon Code Up To 50% Off Today, Save Your Money Now! Camping World 10%
Off Storewide Up To 46% On RV Sales Up To $500 Off Sleep Number Beds Up To 30% Off Stromberg
RV Gear Take 15% Off Your Online Purchase Camping World Gain Up To 45% Off Awnings, Sunblockers & Replacement Fabrics Up To $250 Off Refrigerators, Washers & Dryers
Primary Content No Title No Heading: Continue to campingworld.com Rate 4.1 / 153 Votes With
the advancement of technology, everybody began to pursue high-quality development, and coupomuscode.com is here to assist everyone in achieving this objective in a more comfortable manner. For
your convenience, coupomuscode.com has carefully picked Camping World and coupons. The perfect
bargain is when you can acquire not just the greatest goods, but also the greatest price, allowing you to
use the money you save into something else. Check out the best Camping World coupons and save up
to 80%. The best choice of today is Get Latest, Vitrified 30% Off Promo Code, Don’t Pay Full Price!,
don’t miss it. Total Promo Codes Deals Max Discount Last Updated July 2022 Get Latest, Vitrified
30% Off Promo Code, Don’t Pay Full Price! Save Your Money, Get 30% Off Coupon Code
### end advertisement
The user query is: camping world
Please evaluate that whether the advertisement is relavant to the user query. You can only answer with
True or False. The answer is (True or False):
>>> Response:
True
Figure 13: Examples of the prompt and the labeled responses for the QLP task.
-----
**3. Example for Query Rewriting (QR)**
>>> Prompt:
You are an expert in advertisement.
Given a specific search query A, generate 10 potential query B’s that users might use to express the
same intent as query A.
The goal is to craft diverse yet relevant variations of query B that could match the original intent of
query A.
Focus on avoiding repetitions and ensuring that each variation of Query B is a distinctly different way
of expressing the intent of Query A.
Output in a valid python list format and don’t generate any other text. Now perform the tasks for the
following input.
input: Query A: redfall release xbox
output: Query B:
>>> Response:
[”when is redfall coming out on xbox”, ”redfall xbox launch date”, ”xbox redfall game release”, ”redfall availability on xbox”, ”how to get redfall on xbox”, ”redfall xbox debut”, ”xbox release of redfall
game”, ”redfall game for xbox release time”, ”redfall on xbox launch details”, ”redfall xbox edition
release info”]
Figure 14: Examples of the prompt and the labeled responses for the QR tasks.
**4. Example for Ad Copy Generation (AG)**
>>> Prompt:
You are an expert in advertisement and your task is to craft the ad copy for an advertisement based on
its content. The ad copy should provide a concise description of the advertisement that entices users
before they click into the ad page.
Here is the content of the advertisement:
DocumentTitle: ENCINITAS Florist — Flower Delivery by ENCINITAS FLORIST § Heading:
Flower Delivery in ENCINITAS, CA ; Browse Designs See all flowers ; Hawaiian Hugs and Kisses ;
Always On My Mind™Flower Bouquet ; Victorian Romance ; Over the Moon Bouquet ; The FTD
Always Adored Bouquet ; Coastal Breeze ; Sun-Sational Sunflowersˆa ; Calla lillies ; The Stunning
Beauty™Bouquet by FTD - VASE INCLUDED ; Teleflora’s Desert Sunrise Bouquet ; New Sensations ; TULIPS SPECIAL ; ORCHIDS AND SUCCULENT Rustic ; Vintage Garden ; All White Arce
; Fields of Europe for Summer ; § OtherPrimaryContent: Next-day Delivery § MetaDescription: Best
ENCINITAS florist. ENCINITAS FLORIST offers fresh flower delivery ENCINITAS. Save money by
sending flowers directly with a Local Florist. § FirstGood: For an accessible ADA experience, please
visit our Marketplace. Visit our Accessibility page for further assistance or to learn more about our
efforts.
The generated ad copy is:
>>> Response:
Buy Flowers Online -TitleSep- Local Flower Source -Desc- Flower Shop in Encinitas Say it With
Flowers!
Figure 15: Examples of the prompt and the labeled responses for the AG task.
-----
**5. Example for Ad Description Generation (DG)**
>>> Prompt:
You are an expert in advertisement and your task is to generate a creative ad given its content.
Here is the content of the advertisement:
DocumentTitle: Fertilizer Granulator - Organic and NPK fertilizer making machines § Heading: Cat
Litter Disc Fertilizer Granulator ; New Type Organic Fertilizer Granulator ; Organic Fertilizer Pin
Granulator ; Rotary Drum Granulator ; Double roller extrusion granulator § OtherPrimaryContent:
Home ¿ Fertilizer Machines ¿ Fertilizer Granulator ; Check details § FirstGood: Shunxin News. Shipping disc granulator fertilizer production line to America. BB and NPK fertilizer production lines
shipping to Nigeria. Deliver 10 t/h granular fertilizer produc §
The generated creative ad is:
>>> Response:
From Trash to Treasure: Granulator Machine for Fertilizers
Figure 16: Examples of the prompt and the labeled responses for the DG task.
**6. Example for Title Generation (TG)**
>>> Prompt:
You are an expert in advertisement and your task is to craft the title for an advertisement based on
its content. The ad copy should provide a concise description of the advertisement that entices users
before they click into the ad page.
Here is the content of the advertisement:
DocumentTitle: Four Season Rain Gutters (1145223) § Paragraph: Four Seasons Rain Gutters has
provided courteous, reliable service and high-quality gutters for customers in all of San Diego County,
with an excellent history of customer satisfaction. We use only the best aluminum, copper, steel gutters, no matter how big or small the job is,ve got the experience to get the job done. Also, we are
dedicated to providing our customers with quality workmanship, professionalism, reliability, punctuality, clean work, competitive pricing. Call us. § Heading: Business Information ; Hours of Operation
§ OtherPrimaryContent: Call us today at ; Escondido, CA 92027 ; Get Directions ; Phone ; Website ;
https:www.fourseasonsraingutters.com/ §
The generated title is:
>>> Response:
High Quality Gutters Available
Figure 17: Examples of the prompt and the labeled responses for the TG task.
**7. Example for Title Rewriting (TR)**
>>> Prompt:
You are an expert in advertisement and your task is to rewrite compelling titles that resonates with
user’s query and enticits them to click on the ad, given the orginal titles and the user query.
The user query is: best medical supply stores near me
The original advertisement titles are: Home Health, Hospitals and More - Hopkins Medical Products Hopkins Medical Supplies
The output rewritten advertisement titles are:
>>> Response:
Home Health - Hospital Supplies Near Me - Hopkins Medical Products
Figure 18: Examples of the prompt and the labeled responses for the TR task.
-----
| [
"Xiao, Liang",
"Xinyu, Hu",
"Yeyun, Gong",
"Simiao, Zuo",
"Qiang, Lou",
"Yi, Liu",
"Shao-Lun, Huang",
"Jian, Jiao"
] | 2024-06-24T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2406.16694 | https://arxiv.org/abs/2406.16694 | https://www.semanticscholar.org/paper/a06237d612ae0d53c0c591620c238bd8c2a46158 |
Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering | Large Language Models (LLMs) can teach small language models (SLMs) to solve complex reasoning tasks (e.g., mathematical question answering) by Chain-of-thought Distillation (CoTD). Specifically, CoTD fine-tunes SLMs by utilizing rationales generated from LLMs such as ChatGPT. However, CoTD has certain limitations that make it unsuitable for knowledge-intensive multi-hop question answering: 1) SLMs have a very limited capacity in memorizing required knowledge compared to LLMs. 2) SLMs do not possess the same powerful integrated abilities in question understanding and knowledge reasoning as LLMs. To address the above limitations, we introduce Decompose-and-Response Distillation (D&R Distillation), which distills two student models, namely Decomposer and Responser separately. The two models solve a knowledge-intensive multi-hop question through an interactive process of asking and answering subquestions. Our method offers two advantages: 1) SLMs have the capability to access external knowledge to address subquestions, which provides more comprehensive knowledge for multi-hop questions. 2) By employing simpler subquestions instead of complex CoT reasoning, SLMs effectively mitigate task complexity and decrease data prerequisites. Experimental results on three knowledge-intensive multi-hop question answering datasets demonstrate that D&R Distillation can surpass previous CoTD methods, even with much less training data. | null | # Teaching Small Language Models to Reason for Knowledge-Intensive Multi-Hop Question Answering
**Xiang Li[1][,][2], Shizhu He[1][,][2][*], Fangyu Lei[1][,][2], Jun Yang[3], Tianhuang Su[3],**
**Kang Liu[1][,][2][,][4], Jun Zhao[1][,][2]**
1The Laboratory of Cognition and Decision Intelligence for Complex Systems,
Institute of Automation, Chinese Academy of Sciences
2
School of Artificial Intelligence, University of Chinese Academy of Sciences
3
Guangdong OPPO Mobile Telecommunications Corp.,Ltd.
4
Shanghai Artificial Intelligence Laboratory
{lixiang2022, leifangyu2022}@ia.ac.cn {shizhu.he, kliu, jzhao}@nlpr.ia.ac.cn
{yangjun2, sutianhuang}@oppo.com
**Abstract**
Large Language Models (LLMs) can teach
small language models (SLMs) to solve complex reasoning tasks (e.g., mathematical question answering) by Chain-of-thought Distillation (CoTD). Specifically, CoTD fine-tunes
SLMs by utilizing rationales generated from
LLMs such as ChatGPT. However, CoTD has
certain limitations that make it unsuitable for
knowledge-intensive multi-hop question answering: 1) SLMs have a very limited capacity
in memorizing required knowledge compared
to LLMs. 2) SLMs do not possess the same
powerful integrated abilities in question understanding and knowledge reasoning as LLMs.
To address the above limitations, we introduce
Decompose-and-Response Distillation (D&R
Distillation), which distills two student models, namely Decomposer and Responser separately. The two models solve a knowledgeintensive multi-hop question through an interactive process of asking and answering subquestions. Our method offers two advantages:
1) SLMs have the capability to access external knowledge to address subquestions, which
provides more comprehensive knowledge for
multi-hop questions. 2) By employing simpler
subquestions instead of complex CoT reasoning, SLMs effectively mitigate task complexity
and decrease data prerequisites. Experimental
results on three knowledge-intensive multi-hop
question answering datasets demonstrate that
D&R Distillation can surpass previous CoTD
methods, even with much less training data[1].
**1** **Introduction**
Large language models are capable of answering
complex questions (e.g., mathematical questions)
*Corresponding author
[1Our code will be available at https://github.com/](https://github.com/Xiang-Li-oss/D-R-Distillation)
[Xiang-Li-oss/D-R-Distillation](https://github.com/Xiang-Li-oss/D-R-Distillation)
Figure 1: A comparison of D&R Distillation (ours) and
CoTD (Ho et al., 2023). CoTD teaches one SLM to output all intermediate reasoning steps and the final answer
at once, struggling on knowledge-intensive multi-hop
questions. D&R Distillation teaches two SLMs to interact by asking and answering subquestions, leading them
to collectively reach the final answer.
by generating step-by-step natural language reasoning paths, namely Chains-of-thoughts (CoTs)
(Wei et al., 2022). However, the ability to solve
complex reasoning tasks through CoT prompting is
considered an emergence that appears in very large
models with at least tens of billions of parameters
(Wei et al., 2022), such as PaLM of 540B (Chowdhery et al., 2022), GPT-3 of 175B (Brown et al.,
2020), and LLaMA of 70B (Touvron et al., 2023).
Recent works have proposed to transfer the rea
7804
-----
soning ability of large models to small language
models (SLMs) through Chain-of-thought Distillation (CoTD) (Ho et al., 2023; Magister et al., 2023;
Li et al., 2023a). Specifically, as shown in the upper
part of Figure 1, they leverage the LLM (e.g., ChatGPT) to generate high-quality rationales and finetune a SLM with rationale-augmented questionanswer pairs. CoTD has successfully enhanced
SLMs’ reasoning ability on many reasoning tasks,
such as arithmetic reasoning (Cobbe et al., 2021),
commonsense reasoning (Talmor et al., 2019), and
symbolic reasoning (Wei et al., 2022).
However, previous CoTD works did not effectively address knowledge-intensive reasoning tasks
such as multi-hop question answering (Petroni
et al., 2021; Trivedi et al., 2023). Unlike arithmetic reasoning and commonsense reasoning,
knowledge-intensive reasoning tasks pose greater
challenges due to their requirement for both background knowledge and the ability to perform multistep reasoning. CoTD has two limitations that render it unsuitable for teaching SLM to reason over
knowledge-intensive multi-hop question answering.
1) Knowledge Memorization Gap between
**LLMs and SLMs. Unlike LLMs, which store**
vast amounts of knowledge within their parameters, SLMs are limited in their capacity to memorize the necessary knowledge to solve the tasks
due to their small number of parameters. Besides,
simply augmenting SLM with a one-step retrievalaugmentation strategy (Kang et al., 2023; Zhang
et al., 2023) is also suboptimal for multi-hop questions. For such questions, relevant knowledge often
needs to be retrieved after intermediate reasoning
has concluded, as it may not be explicitly mentioned in the question. For example, consider the
question illustrated in Figure 1, one must first infer
that the eastern sector of Colorado extends
into the High Plains, and then perform further retrieval to obtain evidence pointing to the
evaluation range.
2) Difficulty in Distilling Integrated Subtasks.
In contrast to arithmetic reasoning, which typically involves applying predefined formulas or algorithms, or commonsense reasoning, which relies
on general knowledge and intuition. Solving a
knowledge-intensive multi-hop question via chainof-thought reasoning potentially involves a collection of multiple subtasks, including complex question decomposition, knowledge association, and
knowledge reasoning (Zheng et al., 2023). How
ever, it is highly challenging for an individual SLM
to simultaneously acquire all these integrated capabilities, which leads to the CoTD methods requiring
more training data and being inefficient.
To address the aforementioned limitations, motivated by question decomposition for answering
complex questions (Han et al., 2023; Press et al.,
2023), we propose a novel method to teach SLMs
to reason for knowledge-intensive multi-hop questions, namely Deompose-and-Response Distillation (D&R Distillation, as shown in Figure 1).
Specifically, we propose to prompt LLM in a Self_Ask-Self-Ans strategy by iteratively asking subques-_
tions and responding with intermediate answers.
Then we separately distill two student models,
namely Decomposer and Responser. The Deom_poser is responsible for asking subquestions and_
determining the final answer based on current interaction history. The Responser is responsible
for answering subquestions by leveraging relevant
background knowledge obtained from an external
knowledge base. By formatting the reasoning process as a sequence of generating subquestions and
intermediate answers, these two student models effectively address knowledge-intensive multi-hop
questions within an interactive framework.
Compared with previous Chain-of-thought Distillation methods, our method offers two notable
advantages: 1) By reasoning in an interactive manner, our method allows student models to utilize
external knowledge with each retrieval focusing
on a subquestion. Compared to previous works
relying solely on parameter knowledge or one-step
retrieval augmentation (Ho et al., 2023; Kang et al.,
2023), our method provides a more comprehensive
collection of relevant knowledge required to answer
multi-hop questions. 2) We transform the process
of solving a reasoning question into two interrelated and decoupled subtasks: decomposing the
complex question and solving a series of simpler
subquestions. D&R Distillation effectively reduces
the overall task difficulty while significantly reducing the amount of data required for distillation.
We evaluate the effectiveness of our method
on three knowledge-intensive multi-hop question
answering datasets: HotpotQA, StrategyQA, and
2WikiMultiHopQA. Experimental results demonstrate that D&R distillation significantly improves
the knowledge-intensive reasoning ability of SLMs
with approximately 1/10 of the full training data.
Notably, our method with two 220M SLMs (T5base) outperforms Chain-of-thought Prompting
7805
-----
with an 11B (50 times larger) LLM (Flan-T5-XXL)
on HotpotQA and 2WikiMultiHopQA.
**2** **Related Work**
**Chain-of-Thought** prompting (Wei et al., 2022)
significantly enhances the reasoning capacities of
large language models by augmenting few-shot examples with detailed reasoning steps. Recent works
have further refined CoT through verification (Li
et al., 2023b), question decomposition (Zhou et al.,
2023), and path sampling (Wang et al., 2023; Yao
et al., 2023). However, these aforementioned studies primarily concentrate on enhancing the reasoning capabilities of LLMs, neglecting the necessity
to improve the reasoning abilities of smaller language models (<1B).
**Chain-of-thought Distillation** have been proposed to distill the CoT reasoning ability of LLMs
into SLMs (Ho et al., 2023; Fu et al., 2023; Magister et al., 2023; Hsieh et al., 2023), because the
CoT reasoning ability is considered as an emergent
ability which enables LLM to generate intermediate reasoning steps with CoT prompting (Wei et al.,
2022) (e.g. Let’s think step by step). To augment
Chain-of-thought Distillation (CoTD) with external knowledge, (Kang et al., 2023) augment SLMs
with documents retrieved by a one-step retriever
from the external knowledge base. However, CoTD
is less effective for knowledge-intensive multi-hop
question answering tasks (Petroni et al., 2021),
where both factual knowledge and multi-hop reasoning are important to generate accurate rationale.
In this paper, we propose to distill two student
models and solve a knowledge-intensive multi-hop
question by facilitating an interactive process of
asking and answering subquestions between the
two student models.
**Question Decomposition** (Kalyanpur et al.,
2012; Patel et al., 2022) has long been a crucial
technique for understanding and solving complex
questions. Recent works also utilize question decomposition to improve the reasoning ability of
LLMs. (Zhou et al., 2023) enhances the CoT reasoning ability of LLMs by decomposing questions
into subquestions and sequentially solving subquestions. (Press et al., 2023) explicitly asks LLM itself
follow-up subquestions before answering the original question and answers subquestions with an external search engine. (Shridhar et al., 2023) learns
a semantic decomposition of the original question
into a sequence of subquestions and uses it to train
two models designated for question decomposition
and resolution. Unlike the aforementioned works,
we focus on teaching small language models to
reasoning for knowledge-intensive multi-hop questions with LLM generations. We achieve this by
distilling two student models to interactively ask
and answer subquestions.
**3** **Method**
In this section, we provide a detailed description
of our method. As illustrated in Figure 2, D&R
Distillation can be divided into three stages:
**1) Self-Ask-Self-Ans Prompting: We prompt a**
very large language model (e.g., ChatGPT) to generate D&R Distillation samples, preparing datasets
for training student models.
**2) Decomposer and Responser Training: We**
distill two student models (e.g., T5) with D&R
Distillation samples obtained by stage 1).
**3) Decomposer and Responser Interaction:**
The Decomposer and the Responser address a
knowledge-intensive multi-hop question through
an interactive process of generating subquestions
and obtaining intermediate answers.
**3.1** **_Self-Ask-Self-Ans Prompting_**
In this stage, a teacher model (LLM) is prompted
with Self-Ask-Self-Ans prompting to generate D&R
Distillation samples[2]. Specifically, the teacher
model solves a knowledge-intensive multi-hop
question by iteratively asking itself subquestions
and providing intermediate answers. Consider a
standard sample Si consisting of a question qi and
its golden answer ai. The teacher model serves as
a Decomposer and a Responser alternatively. At
the k-th step, when serving as a Decomposer, the
teacher model decide to continue asking a subquestion s[k]i [or predicting the final answer][ a]i[k] [based on]
interaction history:
_H =< qi, s[1]i_ _[, r]i[1][, ..., s]i[k][−][1], ri[k][−][1]_ _>_
where s[t]i [and][ r]i[t] [are the subquestion and the inter-]
mediate answer of the t-th step. When serving as a
_Responser, the teacher model answers the subques-_
tion s[k]i [proposed before with retrieved passages:]
_Pi[k]_ [=][ topK][(][R][(][p][|][s]i[k][;][ D][)][, K][)]
_ri[k]_ [=][ LLM] [(][P][ k]i _[, s]i[k][)]_
2Prompting examples for the teacher model can be found
in Appendix B
7806
-----
Figure 2: Overview of our proposed D&R Distillation method. Stage 1: A large language model is prompted to
solve a knowledge-intensive multi-hop question by generating a series of subquestions and intermediate answers.
This interaction process is used to compose D&R Distillation samples. Stage 2: D&R Distillation samples are
used to finetune two student models, the Decomposer and the Responser. The Decomposer is responsible for
asking subquestions or determining the final answer based on current interaction history and the Responser is
responsible for answering subquestions with retrieved knowledge. Stage 3: The Decomposer and the Responser
solve a knowledge-intensive multi-hop question in an interactive process.
where R is a retriever and D is a knowledge base
(e.g., Wikipedia). Once the teacher model decide
to predict the final answer a[k]i [, we obtain a D&R]
Distillation sample (qi, s[1]i _[, r]i[1][, ..., s]i[k][−][1], ri[k][−][1], a[k]i_ [)][.]
Moreover, to control the quality of generated samples, we filter generated D&R Distillation samples
by comparing the final prediction a[k]i [of the teacher]
model with the ground truth ai. More detailed filter
criteria can be found in Appendix A.
**3.2** **_Decomposer and Responser Training_**
After acquiring D&R Distillation samples, we
use them to fine-tune two small student models,
namely the Decomposer p[d]θ [and the][ Responser]
_p[r]ϕ_ [with trainable parameters][ θ][ and][ ϕ][ respectively.]
Specifically, consider a D&R Distillation sample
(qi, s[1]i _[, r]i[1][, ..., s]i[k][−][1], ri[k][−][1], a[k]i_ [)][, for the][ Decomposer][,]
we minimize the negative log-likelihood of the sequence of subquestions s[j]i [(][j][ = 1][,][ 2][, ..., k][ −] [1)][ and]
the final answer a[k]i [:]
where H represents the interaction history before
_j-th step:_
_H =< qi, s[1]i_ _[, r]i[1][, ..., s]i[j][−][1], ri[j][−][1]_ _>_
For the Responser, we minimize the negative loglikelihood of the sequence of intermediate answer
_ri[j]_ [with augmented external knowledge:]
_Pi[j]_ [=][ topK][(][R][(][p][|][s]i[j][;][ D][)][, K][)]
_N_ _k_ (2)
_LR(ϕ) = −_ _i=1_ _j=1_ log p[r]ϕ[(][r]i[j][|][s]i[j][, P]i[ j][)]
X X
where R is the same retriever in 3.1.
**3.3** **_Decomposer and Responser Interaction_**
This section describes the behavior of two student models in the inference stage. After the
aforementioned two stages, the Decomposer and
the Responser work interactively to jointly solve
a knowledge-intensive multi-hop question. As
shown in Algorithm 1, we initiate with feeding
the initial question to the Decomposer, at the j-th
step, the Decomposer decides whether to ask another subquestion or predict the final answer based
on current interaction history H. If the generation
of the Decomposer is another subquestion, then
the Responser retrieves related knowledge from a
log p[d]θ[(][o][j]i _[|][H][)]_
_j=1_
X
_LD(θ) =_
_−_
(1)
_i=1_
(o[j]i [=][ a]i[j] _[if j][ =][ k else s]i[j][)]_
7807
-----
**Algorithm 1 Inference of D&R Distillation**
1: Initialization: H = _qi_, MAXSTEP _T_,
_{_ _}_ _←_
_j ←_ 0, p[d]θ[,][ p][r]ϕ[,][ R][,][ D][,][ K]
2: repeat
3: _o[j]i_ [=][ argmax][o][p]θ[d][(][o][|][H][)]
4: **if o[j]i** [is subquestion][ then]
5: _Pi[j]_ [=][ topK][(][R][(][p][|][s]i[j][;][ D][)][, K][)]
6: _ri[j]_ [=][ argmax][r][p]ϕ[r] [(][r][|][o]i[j][, P]i[ j][)]
7: _H.append(o[j]i_ _[, r]i[j][)]_
8: **end if**
9: **if o[j]i** [is final answer][ then]
10: break
11: **end if**
**CoT Distillation finetunes a student model with**
LLM-generated rationales, which is a typical approach for enhancing the reasoning capabilities of
SLMs (Ho et al., 2023). The above baselines measure the capability of a small language model to
solve knowledge-intensive multi-hop question answering relying only on parameter knowledge but
without any external knowledge.
**_Retrieval-Augmented Fine-tuning appends re-_**
trieved passages along with the question at both
training and inference time (Petroni et al., 2021).
**_Retrieval-augmented CoT Distillation augments_**
CoT Distillation with retrieved passages for both
teacher and student models (Kang et al., 2023).
The above two baselines help us to investigate the
impact of incorporating external knowledge.
**4.4** **Implementation Details**
We fine-tune student models for a maximum of 20
epochs with Pytorch-Lightning library[3], setting the
batch size at 16 and the learning rate at 3e − 4.
For Retrieval-augmented methods, we use
Wikipedia as the external knowledge base. For a
fair comparison, we use the sparse retrieval method
BM25 as the retriever provided by Pyserini library
4 for all baseline methods and our method. See
Appendix A for more detail.
**4.5** **Experimental Results**
In this section, we present the knowledge-intensive
reasoning performance of our D&R Distillation.
We compare our method with various baselines
across different model sizes.
As shown in Table 1, the improvement of Chainof-thought Distillation (CoT Distillation) compared
to Fine-tuning is quite limited, and in some cases,
even a performance decline has been observed. For
example, T5-base exhibits a mere 0.9% (32.5%31.6%) increase in Answer F1 on 2WikiMultiHopQA whereas it encounters a 0.4% (19.3%19.7%) drop in Answer F1 on HotpotQA. This
phenomenon can be highly attributed to the lack of
background knowledge. Although CoT Distillation
trains SLMs with augmentation of intermediate
reasoning steps, it remains a challenge for SLMs
to effectively reason without the necessary background knowledge.
The application of retrieval augmentation benefits both Fine-tuning and CoT Distillation. For
example, the utilization of retrieval augmentation
[3https://lightning.ai](https://lightning.ai)
[4https://github.com/castorini/pyserini](https://github.com/castorini/pyserini)
12: _j ←_ _j + 1_
13: until j =MAXSTEP
**Output: final answer o[j]i**
knowledge base and generates a response to the
subquestion. Otherwise, if the generation of the
_Decomposer is the final answer, the interaction ter-_
minates and returns the final answer.
**4** **Experiments**
**4.1** **Datasets**
We evaluate our method on three knowledgeintensive multi-hop question answering datasets in
the open-domain setting: HotpotQA (Yang et al.,
2018), 2WikiMultiHopQA (Ho et al., 2020), and
**StrategyQA (Geva et al., 2021). In contrast to pre-**
vious works (Ho et al., 2023) of fine-tuning with
the entire training set, we only fine-tune our model
with 8800 instances (1/10 of the full training data)
for HotpotQA, 16000 instances (1/10 of the full
training data) for 2WikiMultiHopQA, and 1200
(1/2 of the full training data) instances for StrategyQA, eliminating the need for generating a large
number of rationales with LLMs.
**4.2** **Teacher and Student Models**
For teacher models, we use GPT3.5 (Brown et al.,
2020) provided by the OpenAI API. Unless otherwise stated, we use gpt3.5-turbo-instruct as
the teacher model. For student models, we adopt
T5-{Small, Base, Large} (Raffel et al., 2020).
**4.3** **Baseline Methods**
We provide a comparison of D&R Distillation
(ours) with four baseline methods: Fine-tuning
directly fine-tunes a student model to generate an
answer given only a question (Petroni et al., 2021).
7808
-----
**_Data_** **HotpotQA** **2WikiMultiHopQA** **StrategyQA**
**Method** **Params**
|Col1|Usage|Answer EM Answer F1 Answer EM Answer F1 Answer Acc|
|---|---|---|
|Col1|Col2|Teacher: GPT3.5 (gpt3.5-turbo-instruct)|
|---|---|---|
|Few-shot-CoT|175B -|35.6 49.2 36.5 43.9 66.4|
|---|---|---|
|Col1|Col2|Student: T5 (small, base, large)|
|---|---|---|
|Fine-tuning (Petroni et al., 2021)|60M 220M All 700M|12.6 19.3 26.2 30.3 51.5 13.1 19.7 27.8 31.6 52.3 14.7 22.1 28.9 32.9 56.3|
|---|---|---|
|Retrieval-augmented Fine-tuning (Petroni et al., 2021)|60M 220M All 700M|14.6 (+2.0) 21.5 (+2.2) 27.4 (+1.2) 32.4 (+2.1) 51.1 (-0.4) 15.2 (+2.1) 22.1 (+2.4) 29.1 (+1.3) 33.6 (+2.0) 52.1 (-0.2) 17.3 (+2.6) 23.8 (+1.7) 31.2 (+2.3) 35.4 (+2.5) 58.8 (+2.5)|
|---|---|---|
|CoT Distillation (Ho et al., 2023)|60M 220M All 700M|12.2 (-0.4) 19.1 (-0.2) 26.8 (+0.6) 31.5 (+1.2) 52.8 (+1.3) 12.5 (-0.6) 19.3 (-0.4) 28.3 (+0.5) 32.5 (+0.9) 55.3 (+3.0) 16.9 (+2.2) 23 (+0.9) 30.6 (+1.7) 33.6 (+0.7) 64.4 (+8.1)|
|---|---|---|
|Retrieval-augmented CoT Distillation (Kang et al., 2023)|60M 220M All 700M|14.5 (+1.9) 21.6 (+2.3) 28.3 (+2.1) 32.7 (+2.4) 53.3 (+1.8) 14.7 (+1.6) 22.2 (+2.5) 30.1 (+2.3) 34.6 (+3.0) 56.6 (+4.3) 18.2 (+3.5) 25.5 (+3.4) 32.0 (+3.1) 35.8 (+2.9) 65.0 (+8.7)|
|---|---|---|
|D&R Distillation (ours)|60M 1/10 220M or 1/2 700M|18.2 (+5.6) 26.1 (+6.8) 29.5 (+3.3) 33.7 (+3.4) 55.0 (+3.5) 19.9 (+6.8) 27.9 (+8.2) 32.5(+4.7) 37.0 (+5.4) 59.0 (+6.7) 21.7 (+7.0) 30.4 (+8.3) 34.7 (+5.8) 39.4 (+6.5) 63.3 (+7.0)|
|---|---|---|
Table 1: D&R Distillation Performance. Answer EM/F1/Acc (%) of student models on three knowledge-intensive
multi-hop question answering datasets with D&R Distillation and baseline methods. (+/-) refers to the performance
gain/drop compared to the Fine-tuning baseline. For the larger-scale HotpotQA and 2WikiMultiHopQA datasets,
D&R Distillation only uses 1/10 of the full training data, and for the smaller-scale StrategyQA dataset, D&R
Distillation only uses 1/2 of the full training data.
leads to a noteworthy improvement in the performance of T5-base. It enhances the Answer F1 of
HotpotQA from 19.3% to 22.2% and increases the
Answer accuracy of StrategyQA from 55.3% to
56.6%. However, augmenting CoT Distillation
with a one-step retriever alone can not achieve
comparable results to our method except for the
StrategyQA dataset with T5-large. We attribute
this discrepancy to the nature of the StrategyQA
dataset, which consists of relatively easier yes/no
questions. Therefore, it becomes easier for a model
to find shortcuts to reach the final answer.
In contrast, D&R Distillation improves the
knowledge-intensive reasoning ability of SLMs by
a large margin and surpasses all baseline methods
with student models of different sizes. Moreover,
the performance gap between D&R Distillation
and Fine-tuning baseline enlarges as the number of
parameters of the student model increases. With
T5-large, D&R Distillation achieves an Answer F1
gain of 8.3% and 6.5% over Fine-tuning on HotpotQA and 2WikiHotpotQA respectively.
Furthermore, it is noteworthy that our approach
is trained using a significantly smaller fraction of
the data compared to the baseline methods. For
the larger-scale HotpotQA and 2WikiMultiHopQA
datasets, we utilize only 1/10 of the training data,
while for the smaller-scale StrategyQA dataset, we
use only 1/2 of the training data. The above findings highlight the significant advantages of our
method in terms of both performance and efficiency.
Unlike existing (Retrieval-augmented) CoT Distillation methods, which heavily rely on extensive
CoT annotations but struggle to effectively enhance
the model’s knowledge-intensive reasoning capabilities, our approach achieves superior performance,
despite utilizing only a small fraction of data.
**4.6** **Analysis**
**Efficiency on Model Size and Training Data**
To validate the efficiency of our D&R Distillation
method in terms of model size and training data, we
measure the Answer F1 on HotpotQA and 2WikiMultiHopQA varying model parameters and the
Answer F1 on HotpotQA varying the number of
training data. As shown in Figure 3a, D&R Distillation consistently outperforms the CoTD and RACoTD baselines varying different model sizes with
7809
-----
Fine-Tuning CoTD RA-CoTD FLAN-T5-XXL(11B) Ours
HotpotQA 2WikiMultiHopQA
Fine-Tuning RA-CoTD Ours
30
28
38
28
26
26 36
24
24 34
Answer F1 (%) Answer F1 (%) 22
22 Answer F1 (%)
32
20
20
30
60 220 700 60 220 700 1.0 2.5 10.0
Model Size(M) Model Size(M) Ratio of Training Data (%)
(a) Efficiency on model size
(b) Efficiency on training data
Figure 3: (a) Efficiency on model size and (b) training data. On HotpotQA and 2WikiMultiHopQA, we compare D&R
Distillation against CoT Distillation (CoTD) and Retrieval-augmented CoT Distillation (RA-CoTD) baselines, by varying the
number of parameters, including the few-shot in-context learning performance of Flan-T5-XXL (11B). On HotpotQA, we
compare D&R Distillation varying the number of training data with Fine-tuning and RA-CoTD baseline with full training data.
65
60
55
50
45
40
Recall (%)
35
30
25
20
|Col1|OneR|Ours|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||||
|||||
|||||
|||||
|||||
strates a significant performance improvement com
OneR Ours
HotpotQA 2WikiMultiHopQA StrategyQA
Figure 4: Retrieval Recall for one-step retriever (OneR)
_retrieval-augmented baseline methods and_
our D&R Distillation method. D&R Distillation demon
pared to OneR.
40
35
30
25
20
15
10
|CoTD w/o ret|w/o interaction reiver Ours|Col3|
|---|---|---|
||||
||||
||||
||||
||||
HotpotQA 2WikiMultiHopQA
Figure 5: Ablation study on the multi-step retrieval
and interactive process of D&R Distillation denoted as
**w/o retriever and w/o interaction respectively. The**
absence of either one of them will harm the answer F1
for answering knowledge-intensive multi-hop questions.
icantly outperforms the same model distilled by
RA-CoTD on the full data by 5.7% on answer F1.
With 2.5% of the training data, D&R Distillation
still outperforms RA-CoTD on the full data. This
indicates that we can potentially further increase
the effectiveness of D&R Distillation with more
training data.
**Evaluation on Knowledge Acquisition** To evaluate the effectiveness of D&R Distillation on
retrieving relevant knowledge for knowledgeintensive multi-hop question answering, we compare the retrieval recall of our method with one-step
retriever (OneR) adopted in Retrieval-augmented
only 1/10 of the entire training dataset. Notably, on
the HotpotQA dataset, D&R Distillation with two
60M student models achieves higher Answer F1
than CoTD with a 700M student model, whether
enhanced with Retrieval augmentation. Moreover,
D&R Distillation with two 220M student models
outperforms the 11B LLM (FLAN-T5-XXL) incontext learning baseline. This observation shows
a significant practical advantage of our approach
in resource-restricted settings since the SLM with
D&R Distillation requires significantly less computational cost yet it outperforms the LLM.
As shown in Figure 3b, the proposed D&R
Distillation method can successfully transfer the
knowledge-intensive reasoning ability, using only
a small number of training data. Specifically, with
10% of the training data, D&R Distillation signif
7810
-----
|Method|Retrieved Passages|Outputs|
|---|---|---|
Question: In what year was the novel that Louren Mutarelli based "Nina" on first published? (1866)
|RA-CoTD|(...)Lourenço Mutarelli has also created plays, books, and all the illustrations and animated sequences of the 2004 motion picture "Nina", which was based on the novel "Crime and Punishment" by Dostoyevsky(...)|The novel that Lourenço Mutarelli based "Nina" on, titled "Crime and Punishment", was frist published in 2002.|
|---|---|---|
|Ours|(...)Lourenço Mutarelli has also created plays, books, and all the illustrations and animated sequences of the 2004 motion picture "Nina", which was based on the novel "Crime and Punishment" by Dostoyevsky(...) (...)Crime and Punishment (...) It was first published in the literary journal "The Russian Messenger" in twelve monthly installments during 1866.(...)|What is the name of the novel that Louren Mutarelli based "Nina" on? Crime and Punishment. When was Crime and Punishment first published? 1866. So the final answer is: 1866.|
Question: Who is older, Annie Morton or Terry Richardson? (Terry Richardson)
|RA-CoTD|Annie Morton (born Terrence Richardson|October 8, 1970) is an American model(...) (born August 14, 1965) is an American fashion(...)|Annie Morton was born on October 8, 1970, while Terry Richardson was born on August 14, 1965. Therefore, Annie Morton is older.|
|---|---|---|---|
|Ours|Annie Morton (born October 8, 1970) is an American model(...) Terrence Richardson (born August 14, 1965) is an American fashion(...)||When was Annie Morton born? Annie Morton was born on October 8, 1970. When was Terry Richardson born? Terry Richardson was born on August 14, 1965. So the final answer is: Terry Richardson|
Table 2: Case Study of D&R Distillation (Ours), compared with Retrieval-augmented Distillation (RA-CoTD)
on HotpotQA with T5-base. The gold answer is in blue and the correct/wrong answer is marked as green/red We
highlight supporting facts in the passages as yellow.
**Case Study** In Table 2, we provide two examples from the HotpotQA dataset comparing the
output generated by our D&R Distillation against
the rationale by the baseline method Retrieval_augmented CoT Distillation (RA-CoTD). For the_
first question, RA-CoTD fails to retrieve a passage about Crime and Punishment, as a result, it
mistakenly generates the hallucination that "Crime
and Punishment" was first published in
2002. For the second question, RA-CoTD successfully retrieved the necessary knowledge for
answering the question, however, it fails to perform correct reasoning by mistakenly assuming
that Annie Morton (born in 1970) is older
than Terry Richardson (born in 1965).
In contrast, D&R Distillation successfully retrieves a passage about Crime and Punishment
by first generating subquestion When was Crime
and Punishment first published and retrieving based on the subquestion. Also, D&R Distillation performs the correct reasoning by predicting
that Terry Richardson is older. These examples highlight the effectiveness of our D&R Distillation method for reasoning interactively with adequately acquired relevant knowledge, which leads
to a notably improved performance for knowledgeintensive multi-hop questions.
**5** **Conclusion**
In this paper, we proposed Decompose-andResponse Distillation (D&R Distillation) which enhances the reasoning capabilities of small language
models (SLMs) on knowledge-intensive multi-hop
question answering. Our approach involves dis
baseline methods. As shown in Figure 4, our
method achieved significantly higher recall compared to OneR. Particularly, D&R Distillation
demonstrates a remarkable 20.6% superiority in recall over OneR on the 2WikiMultiHopQA dataset.
This indicates that by decomposing and retrieving
based on subquestions iteratively, D&R Distillation obtains a more sufficient set of knowledge to
answer knowledge-intensive multi-hop questions.
**Ablation Study** We conduct an ablation study to
demonstrate the effectiveness of two designs in our
method: 1) incorporating multi-step retrieval based
on subquestions and 2) interaction process between
_Decomposer and Responser. For 1), we disable_
the retriever and do not provide retrieved passages
for Responser, denoted as w/o retriever. For 2)
we train Decomposer to output all subquestions at
once and train the Responser to output all intermediate answers, as well as the final answer at once,
denoted as w/o interaction. We then compare the
Answer F1 of the two ablation settings with our
original design and the CoT Distillation (CoTD)
baseline. As shown in Figure 5, both of these designs are crucial for our method, as the absence
of either one would result in performance degradation. On the other hand, the performance without
either of these designs still surpasses that of CoTD,
demonstrating their strength. The performance decline becomes even more pronounced when the
retriever is removed (w/o retriever), further confirming the crucial role of background knowledge
for knowledge-intensive multi-hop reasoning.
7811
-----
tilling two student models separately, with one
student model focusing on decomposing subquestions and another student model focusing on answering subquestions with retrieved background
knowledge. Through extensive experiments, we
showed that D&R Distillation outperforms previous Chain-of-thought Distillation approaches with
much less training data.
**Limitations**
We conduct experiments on three knowledgeintensive multi-hop question-answering datasets,
demonstrating the effectiveness of D&R Distillation. However, our method is specially designed
for knowledge-intensive reasoning tasks. This limitation poses a constraint on the wider applicability
of our method. We plan to extend D&R Distillation to a wider range of reasoning tasks in the
future. On the other hand, due to limitations in
computational resources, we were unable to conduct experiments on larger-scale language models
(> 1B). We will further explore the performance of
D&R Distillation on larger-scale language models
in future research.
**Ethics Statement**
The proposed method has no obvious potential
risks. All the scientific artifacts used/created are
properly cited/licensed, and the usage is consistent
with their intended use. All the data used in this
work contains no private information.
**Acknowledge**
This work was supported by the Strategic Priority
Research Program of Chinese Academy of Sciences (No. XDA27020203) and the National Natural Science Foundation of China (No. 62376270,
No. 62276264) and OPPO Research Fund.
**References**
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, et al. 2020. Language models are few-shot
learners. Advances in neural information processing
_systems, 33:1877–1901._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Yao Fu, Hao Peng, Litu Ou, Ashish Sabharwal, and
Tushar Khot. 2023. Specializing smaller language
models towards multi-step reasoning. arXiv preprint
_arXiv:2301.12726._
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://doi.org/10.1162/tacl_a_00370)
[use a laptop? a question answering benchmark with](https://doi.org/10.1162/tacl_a_00370)
[implicit reasoning strategies. Transactions of the](https://doi.org/10.1162/tacl_a_00370)
_Association for Computational Linguistics, 9:346–_
361.
Chengcheng Han, Xiaowei Du, Che Zhang, Yixin Lian,
[Xiang Li, Ming Gao, and Baoyuan Wang. 2023. Di-](https://doi.org/10.18653/v1/2023.emnlp-main.501)
[alCoT meets PPO: Decomposing and exploring rea-](https://doi.org/10.18653/v1/2023.emnlp-main.501)
[soning paths in smaller language models. In Proceed-](https://doi.org/10.18653/v1/2023.emnlp-main.501)
_ings of the 2023 Conference on Empirical Methods_
_in Natural Language Processing, pages 8055–8068,_
Singapore. Association for Computational Linguistics.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers. In](https://doi.org/10.18653/v1/2023.acl-long.830)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 14852–14882, Toronto, Canada._
Association for Computational Linguistics.
Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara,
and Akiko Aizawa. 2020. [Constructing a multi-](https://doi.org/10.18653/v1/2020.coling-main.580)
[hop QA dataset for comprehensive evaluation of](https://doi.org/10.18653/v1/2020.coling-main.580)
[reasoning steps. In Proceedings of the 28th Inter-](https://doi.org/10.18653/v1/2020.coling-main.580)
_national Conference on Computational Linguistics,_
pages 6609–6625, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cheng-Yu Hsieh, Chun-Liang Li, Chih-kuan Yeh,
Hootan Nakhost, Yasuhisa Fujii, Alex Ratner, Ranjay
[Krishna, Chen-Yu Lee, and Tomas Pfister. 2023. Dis-](https://doi.org/10.18653/v1/2023.findings-acl.507)
[tilling step-by-step! outperforming larger language](https://doi.org/10.18653/v1/2023.findings-acl.507)
[models with less training data and smaller model](https://doi.org/10.18653/v1/2023.findings-acl.507)
[sizes. In Findings of the Association for Compu-](https://doi.org/10.18653/v1/2023.findings-acl.507)
_tational Linguistics: ACL 2023, pages 8003–8017,_
Toronto, Canada. Association for Computational Linguistics.
Aditya Kalyanpur, Siddharth Patwardhan, BK Boguraev, Adam Lally, and Jennifer Chu-Carroll. 2012.
Fact-based question decomposition in deepqa. IBM
_Journal of Research and Development, 56(3.4):13–1._
Minki Kang, Seanie Lee, Jinheon Baek, Kenji
Kawaguchi, and Sung Ju Hwang. 2023. Knowledgeaugmented reasoning distillation for small language
models in knowledge-intensive tasks. arXiv preprint
_arXiv:2305.18395._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
7812
-----
_neural information processing systems, 35:22199–_
22213.
Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang
[Ren, Kai-Wei Chang, and Yejin Choi. 2023a. Sym-](https://doi.org/10.18653/v1/2023.acl-long.150)
[bolic chain-of-thought distillation: Small models can](https://doi.org/10.18653/v1/2023.acl-long.150)
[also “think” step-by-step. In Proceedings of the 61st](https://doi.org/10.18653/v1/2023.acl-long.150)
_Annual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), pages 2665–_
2679, Toronto, Canada. Association for Computational Linguistics.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023b. Making](https://doi.org/10.18653/v1/2023.acl-long.291)
[language models better reasoners with step-aware](https://doi.org/10.18653/v1/2023.acl-long.291)
[verifier. In Proceedings of the 61st Annual Meet-](https://doi.org/10.18653/v1/2023.acl-long.291)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 5315–5333, Toronto,_
Canada. Association for Computational Linguistics.
[Ilya Loshchilov and Frank Hutter. 2019. Decoupled](https://openreview.net/forum?id=Bkg6RiCqY7)
[weight decay regularization. In International Confer-](https://openreview.net/forum?id=Bkg6RiCqY7)
_ence on Learning Representations._
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
[Teaching small language models to reason. In Pro-](https://doi.org/10.18653/v1/2023.acl-short.151)
_ceedings of the 61st Annual Meeting of the Associa-_
_tion for Computational Linguistics (Volume 2: Short_
_Papers), pages 1773–1781, Toronto, Canada. Associ-_
ation for Computational Linguistics.
Pruthvi Patel, Swaroop Mishra, Mihir Parmar, and
[Chitta Baral. 2022. Is a question decomposition unit](https://doi.org/10.18653/v1/2022.emnlp-main.302)
[all we need?](https://doi.org/10.18653/v1/2022.emnlp-main.302) In Proceedings of the 2022 Confer_ence on Empirical Methods in Natural Language_
_Processing, pages 4553–4569, Abu Dhabi, United_
Arab Emirates. Association for Computational Linguistics.
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick
Lewis, Majid Yazdani, Nicola De Cao, James Thorne,
Yacine Jernite, Vladimir Karpukhin, Jean Maillard,
Vassilis Plachouras, Tim Rocktäschel, and Sebastian
[Riedel. 2021. KILT: a benchmark for knowledge](https://doi.org/10.18653/v1/2021.naacl-main.200)
[intensive language tasks. In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.200)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2523–2544, Online._
Association for Computational Linguistics.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt,
[Noah Smith, and Mike Lewis. 2023. Measuring and](https://doi.org/10.18653/v1/2023.findings-emnlp.378)
[narrowing the compositionality gap in language mod-](https://doi.org/10.18653/v1/2023.findings-emnlp.378)
[els. In Findings of the Association for Computational](https://doi.org/10.18653/v1/2023.findings-emnlp.378)
_Linguistics: EMNLP 2023, pages 5687–5711, Singa-_
pore. Association for Computational Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J Liu. 2020. Exploring the limits
of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research,
21(1):5485–5551.
Kumar Shridhar, Alessandro Stolfo, and Mrinmaya
[Sachan. 2023. Distilling reasoning capabilities into](https://doi.org/10.18653/v1/2023.findings-acl.441)
[smaller language models. In Findings of the Asso-](https://doi.org/10.18653/v1/2023.findings-acl.441)
_ciation for Computational Linguistics: ACL 2023,_
pages 7059–7073, Toronto, Canada. Association for
Computational Linguistics.
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. CommonsenseQA: A ques-](https://doi.org/10.18653/v1/N19-1421)
[tion answering challenge targeting commonsense](https://doi.org/10.18653/v1/N19-1421)
[knowledge. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1421)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4149–4158, Minneapolis, Minnesota. Association for
Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Harsh Trivedi, Niranjan Balasubramanian, Tushar Khot,
[and Ashish Sabharwal. 2023. Interleaving retrieval](https://doi.org/10.18653/v1/2023.acl-long.557)
[with chain-of-thought reasoning for knowledge-](https://doi.org/10.18653/v1/2023.acl-long.557)
[intensive multi-step questions. In Proceedings of](https://doi.org/10.18653/v1/2023.acl-long.557)
_the 61st Annual Meeting of the Association for Com-_
_putational Linguistics (Volume 1: Long Papers),_
pages 10014–10037, Toronto, Canada. Association
for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_The Eleventh International Conference on Learning_
_Representations._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio,
William Cohen, Ruslan Salakhutdinov, and Christo[pher D. Manning. 2018. HotpotQA: A dataset for](https://doi.org/10.18653/v1/D18-1259)
[diverse, explainable multi-hop question answering.](https://doi.org/10.18653/v1/D18-1259)
In Proceedings of the 2018 Conference on Empiri_cal Methods in Natural Language Processing, pages_
2369–2380, Brussels, Belgium. Association for Computational Linguistics.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik R
Narasimhan. 2023. [Tree of thoughts: Deliberate](https://openreview.net/forum?id=5Xc1ecxO1h)
[problem solving with large language models.](https://openreview.net/forum?id=5Xc1ecxO1h) In
_Thirty-seventh Conference on Neural Information_
_Processing Systems._
Jianyi Zhang, Aashiq Muhamed, Aditya Anantharaman,
Guoyin Wang, Changyou Chen, Kai Zhong, Qingjun
Cui, Yi Xu, Belinda Zeng, Trishul Chilimbi, and Yi[ran Chen. 2023. ReAugKD: Retrieval-augmented](https://doi.org/10.18653/v1/2023.acl-short.97)
7813
-----
[knowledge distillation for pre-trained language mod-](https://doi.org/10.18653/v1/2023.acl-short.97)
[els. In Proceedings of the 61st Annual Meeting of the](https://doi.org/10.18653/v1/2023.acl-short.97)
_Association for Computational Linguistics (Volume_
_2: Short Papers), pages 1128–1136, Toronto, Canada._
Association for Computational Linguistics.
Shen Zheng, Jie Huang, and Kevin Chen-Chuan Chang.
2023. Why does chatgpt fall short in answering questions faithfully? arXiv preprint arXiv:2304.10513.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/forum?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/forum?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations._
**A** **Implementation Detail**
**Dataset** For HotpotQA and 2WikiMultiHop
datasets, we use the official dev split since the test
split is not publicly available. For StrategyQA, we
split the training set into a 9: 1 ratio to build the
in-house test set. Moreover, to control the quality
of generated samples, we discard generated D&R
Distillation samples if the F1 between the predicted
answer and the ground is below 0.7.
**Training and Inference** For all our experiments,
we fine-tune the small language model using the
AdamW optimizer (Loshchilov and Hutter, 2019).
We fine-tune student models for a maximum of 20
epochs, setting the batch size at 16 and the learning
rate at 3e − 4. All our experiments can be run on
2 NVIDIA GTX 3090 GPUs. For text generation,
we apply greedy decoding for all models following
(Wei et al., 2022; Kojima et al., 2022).
**Retriever** We use Wikipedia as the external
knowledge base and BM25 as the retriever. We
set TopK=3 for our retriever, for retrieved passages,
we keep the first 100 words for each passage.
**B** **Prompts**
Prompting examples for the three datasets can be
found on Table 3, Table 4, and Table 5.
7814
-----
Question: What is the elevation range for the area that the eastern sector of
the Colorado orogeny extends into?
Subquestion: What does the eastern sector of the Colorado orogeny extends into?
Intermediate answer: The eastern sector of Colorado orogeny extends into the High Plains.
Subquestion: What is the elevation range for the High Plains?
Intermediate answer: High Plains rise in elevation from around 1,800 to 7,000 ft.
So the final answer is: 1,800 to 7,000 ft
Question: Musician and satirist Allie Goertz wrote a song about the "The Simpsons"
character Milhouse, who Matt Groening named after who?
Subquestion: Who is the "The Simpsons" character Milhouse named after.
Intermediate answer: Richard Milhous Nixon
So the final answer is: Richard Milhous Nixon
Question: Which documentary is about Finnish rock groups, Adam Clayton Powell or The Saimaa Gesture?
Subquestion: What is the documentary Adam Clayton Powell (film) about?
Intermediate answer: Adam Clayton Powell (film) is a documentary about an African-American politician.
Subquestion: What is the documentary The Saimaa Gesture (film) about?
Intermediate answer: The Saimaa Gesture is a film about three Finnish rock groups.
So the final answer is: The Saimaa Gesture
Question: Which magazine was started first Arthur’s Magazine or First for Women?
Subquestion: When was Arthur´s Magazine started?
Intermediate Answer: Arthur´s Magazine was started in 1844.
Subquestion: When was First for Women started?
Intermediate Answer: First for Women was started in 1989.
So the final answer is: Arthur´s Magazine
Table 3: Prompts for the HotpotQA dataset.
Question: When did the director of film Hypocrite (Film) die?
Subquestion: Who directed the film Hypocrite (Film)?
Intermediate answer: Miguel Morayta.
Subquestion: When did Miguel Morayta die?
Intermediate answer: Miguel Morayta died on 19 June 2013.
So the final answer is: 19 June 2013
Question: Are both Kurram Garhi and Trojkrsti located in the same country?
Subquestion: Which country is Kurram Garhi located in?
Intermediate answer: Kurram Garhi is located in the country of Pakistan.
Subquestion: Which country is Trojkrsti located in?
Intermediate answer: Trojkrsti is located in the country of Republic of Macedonia.
So the final answer is: No
Question: Which album was released earlier, What’S Inside or Cassandra’S Dream (Album)?
Subquestion: When was the album What’s Inside released?
Intermediate answer: What’s Inside was released in the year 1995.
Subquestion: When was the album Cassandra’S Dream (Album) released?
Intermediate answer: Cassandra’s Dream (album) was released in the year 2008.
So the final answer is: What’s Inside
Question: What is the cause of death of Grand Duke Alexei Alexandrovich Of Russia’s mother?
Subquestion: Who is the mother of Grand Duke Alexei Alexandrovich of Russia?
Intermediate answer: Maria Alexandrovna.
Subquestion: What is the cause of death of Maria Alexandrovna?
Intermediate answer: Maria Alexandrovna died from tuberculosis.
So the final answer is: Ytuberculosis
Table 4: Prompts for the 2WikiMultiHop dataset.
7815
-----
Question: Could the members of The Police perform lawful arrests?
Subquestion: Who can perform lawful arrests?
Intermediate answer: Only law enforcement officers can perform lawful arrests.
Subquestion: Are members of The Police also?
Intermediate answer: No, The members of The Police were musicians, not law enforcement officers.
So the final answer is: No
Question: Is a Boeing 737 cost covered by Wonder Woman (2017 film) box office receipts?
Subquestion: How much does a Boeing 737 cost?
Intermediate answer: The average cost of a US Boeing 737 plane is 1.6 million dollars.
Subquestion: How much did the 2017 movie Wonder Woman gross?
Intermediate answer: Wonder Woman (2017 film) grossed over 800 million dollars at the box office.
So the final answer is: Yes
Question: Would a Monoamine Oxidase candy bar cheer up a depressed friend?
Subquestion: Depression is caused by low levels of what chemicals?
Intermediate answer: Depression is caused by low levels of serotonin, dopamine and norepinephrine.
Subquestion: Can Monoamine Oxidase lowers levels of serotonin, dopamine and norepinephrine?
Intermediate answer: No, Monoamine Oxidase breaks down neurotransmitters
and lowers levels of serotonin, dopamine and norepinephrine.
So the final answer is: No
Question: Is the language used in Saint Vincent and the Grenadines rooted in English?
Subquestion: What language is used in Saint Vincent and the Grenadines?
Intermediate answer: The primary language spoken in Saint Vincent and the Grenadines is Vincentian Creole.
Subquestion: Is Vincentian Creole based in English?
Intermediate answer: Yes, Vincentian Creole is English-based.
So the final answer is: Yes
Table 5: Prompts for the StrategyQA dataset.
7816
-----
| [
"Jun, Zhao",
"Xiang, Li",
"Vivek, Srikumar",
"Fangyu, Lei",
"JunYang, JunYang",
"Shizhu, He",
"Tianhuang, Su",
"Kang, Liu",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Findings | false | 1 | 0 | null | https://aclanthology.org/2024.findings-acl.464 | null | https://www.semanticscholar.org/paper/cd04393173e6b489cab6b0dfb4843bbad316466a |
Testing the limits of logical reasoning in neural and hybrid models | We study the ability of neural and hybrid models to generalize logical reasoning patterns. We created a series of tests for analyzing various aspects of generalization in the context of language and reasoning, focusing on compositionality and recursiveness. We used them to study the syllogistic logic in hybrid models, where the network assists in premise selection. We analyzed feed-forward, recurrent, convolutional, and transformer architectures. Our experiments demonstrate that even though the models can capture elementary aspects of the meaning of logical terms, they learn to generalize logical reasoning only to a limited degree. | This work created a series of tests for analyzing various aspects of generalization in the context of language and reasoning, focusing on composition-ality and recursiveness, and demonstrated that even though the models can capture elementary aspects of the meaning of logical terms, they learn to generalize logical reasoning only to a limited degree. | # Testing the limits of logical reasoning in neural and hybrid models
**Manuel Vargas Guzmán**
University of Warsaw
[email protected]
**Abstract**
**Jakub Szymanik**
University of Trento
[email protected]
**Maciej Malicki**
Institute of Mathematics of the
Polish Academy of Sciences
[email protected]
We study the ability of neural and hybrid models to generalize logical reasoning patterns. We
created a series of tests for analyzing various
aspects of generalization in the context of language and reasoning, focusing on compositionality and recursiveness. We used them to study
the syllogistic logic in hybrid models, where
the network assists in premise selection. We analyzed feed-forward, recurrent, convolutional,
and transformer architectures. Our experiments
demonstrate that even though the models can
capture elementary aspects of the meaning of
logical terms, they learn to generalize logical
reasoning only to a limited degree.
**1** **Introduction**
Despite the enormous successes of models based
on deep learning, we still need to know more about
how and what these models learn. The question
of fundamental importance is to what extent they
can ‘grasp’ the rules (or – more generally – the
structure) governing involved data and tasks. It
can be phrased as the problem of generalization,
i.e., the ability to perform on data unseen during
training.
Language structure is well understood from several perspectives: grammar, semantics, or rules of
reasoning have been extensively studied and successfully formalized. However, even in this area,
despite the available theoretical background, the
methodology for studying generalization is still
not well developed. The need for a systematic approach to this problem is indicated by a recent survey (Hupkes et al., 2023) of generalization research
in NLP.
So far, the study of neural models for tasks related to logic and reasoning is rather limited. An
early attempt is Bowman et al. (2015), where networks learn logical relations, such as entailment,
between pairs of sentences in a simple artificial
language. More recent work Ontanon et al. (2022)
_A_
_A_
a)
_A_
_a_
_X1 = [Aab, Abc, Abd, Aef, Edf, Ofe, Eae]_
b)
_Y1 = [_ 1 0 1 1 1 0 ]
_X2 = [Aab, Abc, Abd, Aef, Edf, Ofe, Eab]_
c)
_Y2 = [_ 0 0 0 0 0 0 ]
Figure 1: a) Example of a simple knowledge base
_KB_ = _{Aab, Abc, Abd, Aef, Edf, Ofe} b) Exam-_
ple of input X1 and label Y1 to build the inference
_{Aab, Abd, Aef, Edf_ _} ⊢_ _Eae. The input always con-_
tains the whole knowledge base KB and a hypothesis H
at the end. Formulas are encoded as 1-hot vectors; the
label is a binary vector indicating the premises needed
to derive H, if it is valid, or the 0-vector, otherwise. c)
Example of input X2 and label Y2 for invalid hypothesis
_Eab._
involves models that determine whether a given inference can be proved from a given set of premises
by providing the list of inference rules as an output.
In Clark et al. (2021), models learn to reason with
prescribed rules, while in Schlegel et al. (2022), the
authors consider models deciding whether a given
set of sentences is consistent. It is worth mentioning that investigating reasoning has a sound linguistic motivation. To take a straightforward example,
it is hard to argue that a model grasps the meaning
of quantifier "all" if it is not able to perform reasonings of the form: "All a are b" and "All b are c"
implies "All a are c."
In this paper, we focus on logical reasoning in
the syllogistic fragment of the natural language.
The syllogistic logic has nice properties, e.g.,
soundness and completeness. Notably, the logic is
2267
-----
non-trivial but still sufficiently elementary to play
the role of a benchmark for models of reasoning.
We investigate the generalization of inference
patterns in the training data and the following task.
The network, presented with a knowledge base
_KB (i.e., a set of premises) and a hypothesis H_
selects the premises required to construct a proof
of H from KB (if it exists); see Figure 1 for an
illustrative example. Thus, one can think of our
models as hybrid models: by selecting premises,
the network assists the prover that is supposed to
construct a proof. The paper can also be described
as a study of reasoning in the presence of multiple
premises, a research line rarely explored in deep
learning.
We are mainly interested in two aspects of generalization: recursiveness (elements can be iteratively combined) and compositionality (structures
are determined by their constituents). It is worth
emphasizing that they are frequently conflated even
though conceptually different. There are fully recursive systems that are not compositional, the bestknown example being Tarski’s interpretation of
first-order logic; see Janssen and Partee (1997) for
a detailed discussion and more examples. There
are also fully compositional structures with limited recursiveness, e.g., Boolean operations on a
finite family of sets are compositional but can be
combined only in a finite number of ways.
In the context of reasoning, we will say that a
model processes inferences in a recursive manner if
it is capable of applying inference patterns learned
during training to more complex instances. Going
back to the previous example, if the model knows
that “All a are b” and "All b are c" implies “All
_a are c,” it should also be able to conclude from_
the extra piece of information “All c are d” that
“All a are d.” Compositionality means the converse
situation: provided that the model knows how to
apply an inference pattern to complex instances, it
should be able to do so for simpler ones. In other
words, the derivation of “All a are d” from “All a
are b,” “All b are c,” and “All c are d,” should be
accompanied by the derivation of “All a are c.”
In the study, we employed different types of architectures, Multilayer Perceptron, Recurrent Neural Networks, Convolutional Neural Networks, and
Transformers, to compare their performance and
capabilities for generalization on artificially generated syllogistic corpora. On the surface of things,
the models manage to learn the assigned task almost perfectly (see Table 3); in particular, the gen
eralization gap, which is a standard measure of generalization, is very low. However, the experiments
designed to verify recursive and compositional generalization reveal that neural networks—and the
hybrid models they comprise—poorly generalize,
regardless of architecture. In particular, this sheds
light on the purported superiority of the transformer
architecture. On the positive side, some evidence
for recursive generalization can be observed.
Last but not least, one of our primary goals is to
contribute to developing a methodology for investigating generalization in the context of recursiveness and compositionality. The approach proposed
in this paper can be exploited in other settings,
either directly related to reasoning, e.g., other fragments of language and inference systems, or not,
e.g., sequence-to-sequence models studied in Hupkes et al. (2019) and Lake and Baroni (2023).
**2** **Syllogistic Logic**
Pratt-Hartmann Pratt-Hartmann (2004) defines a
_fragment of a natural language as a subset of that_
language with an uncontroversial translation into
a formal language that reconstructs logical entailment. The syllogistic fragment, first introduced
and studied by Aristotle, is the simplest non-trivial
language fragment. Aristotle considered only syllogisms consisting of two premises and a conclusion.
A well-known example is “If all men are mortal
and all Greeks are men, then all Greeks are mortal.” However, classical syllogistic can be easily
extended to inferences involving more than two
premises, see, e.g., Łukasiewicz (1951); Smiley
(1973). In our setting, only general names with nonempty denotations are allowed. Thus, “Socrates is
a man” is not a syllogistic formula for us, while
“Every unicorn is an animal” implies “Some unicorn is an animal”.
**2.1** **Language**
The syllogistic comprises the formulas Aab (“Every a is b”), Eab (“No a is b”), Iab (“Some a is
_b”), and Oab (“Some a is not b”). The former_
two are called universal formulas since the translation to the first order logic is ∀x.[A(x) → _B(x)]_
and ∀x.[A(x) →¬B(x)], respectively. And the
latter two are existential formulas represented as
_∃x.[A(x)_ _∧_ _B(x)] and ∃x.[A(x)_ _∧¬B(x)], respec-_
tively. Note that translations of Aab and Oab are
contradictory, and so are Iab and Eae. Moreover,
existential formulas are symmetric, i.e., Iab and
2268
-----
(1) {Aa − _b, Ac −_ _d, Oad} ⊢_ _Obc_
(2) {Aa − _b} ⊢_ _Aab_
(3) {Aa − _b, Ac −_ _d, Aa −_ _e, Ede} ⊢_ _Obc_
(4) {Aa − _b, Aa −_ _c} ⊢_ _Ibc_
(5) {Aa − _b, Ac −_ _d, Ae −_ _f, Iae, Edf_ _} ⊢_ _Obc_
(6) {Aa − _b, Ac −_ _d, Ebd} ⊢_ _Eac_
(7) {Aa − _b, Ac −_ _d, Iac} ⊢_ _Ibd_
Table 1: List of all types of syllogistic inferences
_Iba have equivalent translations, and so do Eab_
and Eba.
We define a language as follows: let V =
(Q, C) be a vocabulary of quantifier symbols
_Q = {A, E, I, O} and constant symbols C =_
_{a, b, c, . . .}._ Formulas are built as Axy, Exy,
_Ixy, or Oxy, where x, y ∈C, x ̸= y. In particular,_
_Aaa is not a formula._
There is no negation in our language; however,
we denote the “contradiction” of a formula F by
_F_, i.e., Aab = Oab, Oab = Aab, Iab = Eab, and
_Eab = Iab._
An A-chain, denoted as Aa − _b, represents ei-_
ther the formula Aab or the sequence of two or
more formulas Aac1, Ac1c2, . . ., Acn−1cn, Acnb
(for n ≥ 1). Finally, a knowledge base is a finite
set of formulas or premises.
**2.2** **Types of syllogistic inferences**
In this paper, we follow Smiley (1973). However,
we do not delve into details; in particular, we do not
specify the proof system because it does not matter
in our framework. The aforementioned translation
of syllogistic formulas into first-order logic allows
for interpreting formulas by interpreting constants
as non-empty unary predicates. This is sufficient
to define the notions of consistency and inference.
A set F of formulas is consistent if there is an
interpretation of constants that makes all formulas
in F true. A formula F is a conclusion from a set
of premises F if F ∪{F _} is inconsistent. We write_
_F ⊢_ _F for the inference formed by premises F_
and conclusion F . Given a knowledge base KB, a
hypothesis H is valid if KB ⊢ _H, otherwise H is_
_invalid._
In the paper, we are interested in minimal
inferences, i.e., inferences F _⊢_ _F such that_
_F_ _[′]_ _̸⊢_ _F for any proper subset F_ _[′]_ _⊂F._ For
example, {Abc, Abd} ⊢ _Icd is minimal, while_
_{Aab, Abc, Abd} ⊢_ _Icd is not because Aab is not_
needed to infer the conclusion. Minimal inferences
correspond to antilogisms, i.e., minimal inconsistent sets of syllogistic formulas.
**Theorem 1 (Smiley (1973)). Every antilogism is of**
_the following form {Aa −_ _b, Oab}, {Aa −_ _b, Aa −_
_c, Ebc}, or {Aa −_ _b, Ac −_ _d, Iac (or Ica), Ebd}._
**Theorem 2 (Smiley (1973)). Let F be a formula**
_and F be a set of formulas. F ∪{F_ _} is an antilo-_
_gism if and only if F ⊢_ _F_ _, and F ⊢_ _F is minimal._
All minimal syllogistic inference types can be
easily recovered from the above theorems. The fi
_O_
a)
b)
_O_
c)
_I_
Figure 2: Diagrams illustrating examples of types of
syllogistic inferences (dashed lines represent conclusions) a) Type (1) {Aa − _b, Ac −_ _d, Oad} ⊢_ _Obc b)_
Type (6) {Aa − _b, Ac −_ _d, Ebd} ⊢_ _Eac c) Type (5)_
_{Aa −_ _b, Ac −_ _d, Ae −_ _f, Iae, Edf_ _} ⊢_ _Obc_
nal list is presented in Table 1 (see A.1 for more details). To cover all syllogisms, symmetric formulas
need to be used interchangeably, e.g., Ixy = Iyx;
formulas of the form Aaa are disregarded.
To give the reader a better idea of what syllogistic inferences look like, we present in Figure 2
diagrams illustrating some of them.
We say that an inference F ⊢ _F can be decom-_
posed into inferences 1 _F1,_ 2 _F1_ _F_,
_F_ _⊢_ _F_ _∪{_ _} ⊢_
if F = F1∪F[˙] 2, i.e., the premises can be split
into two disjoint subsets F1 and F2 so that F1
forms premises of the first inference, and 2, to_F_
gether with the conclusion F1 from F1, forms the
premises of the second one. The main observation
here is that every inference can be decomposed into
an inference with all A-chains of length 1 and an
inference of type 2 (see Table 1). Other decompositions, discussed in 4.2, are also possible for some
inference types.
2269
-----
**3** **Synthetic Data and Neural Models**
In order to avoid problems related to the choice of
premises needed to infer a given hypothesis, we
used only non-redundant knowledge bases, i.e.,
knowledge bases such that for every valid hypothesis, there is a unique minimal set of premises that
proves it.
We represent a knowledge base as a graph where
vertices are constants and edges denote quantifiers
(see Figure 1a). All A-formulas are a set of m disjoint trees T1, . . ., Tm (or a forest). Each tree is
a directed graph Ti = (V, E) such that there is at
most one path between any two vertices. We created synthetic consistent non-redundant knowledge
bases KB for training and testing the neural models
using the following general algorithm:
1. Randomly generate a forest where each vertice corresponds to a constant and every directed path between two vertices corresponds
to an A-chain.
2. For every pair of (different) trees (Ti, Tj):
Add one E-formula and one I-formula between Ti and Tj.
3. For each tree Ti:
Add O-formulas within Ti.
We randomly add formulas (steps 2. and 3.) such
that there is no redundancy and the set KB remains
consistent.
For every experiment we generated a consistent non-redundant knowledge base KB =
_P1, . . . Pn_ of n premises. We trained neural mod_{_ _}_
els using a multi-label approach and supervised
learning techniques. Each element of the dataset
consists of an input X associated with a label Y .
The input vector X encodes the knowledge base
_KB and a hypothesis H. For a valid H, the label Y_
is a binary vector of size n that tags all the necessary premises to derive H by assigning 1 to every
_Yi if KB \ {Pi} ̸⊢_ _H and 0, otherwise. For invalid_
_H, Y is the zero vector._
We stratify the training/test split by types of inferences, for every valid type we train 75% and
test on the remaining 25%. For invalid hypotheses,
we only train 20%, since they make up more than
80% of the data (see Table 2 for distribution of hypotheses). In some experiments, the stratification
somewhat differs (i.e., when we remove an inference type from the training data), but, in general,
we stick to the above stratification principles.
We used one-hot encodings to produce input vectors. Each constant and quantifier are represented
as a one-hot vector of dimension d (where d is the
size of the vocabulary). We also tested word embeddings to encode knowledge bases, but one-hot
encodings give better performance (see A.2).
**4** **Experiments and Results**
The data and scripts to run these experiments are
available online[1]. We randomly generated 5 consistent knowledge bases. Each of them consists of 4
trees, 66 constants, and 78 formulas. We made sure
that no valid hypothesis gave rise to the same label
in two different knowledge bases. In the following
experiments, we trained 4 different architectures
of neural models: Multilayer Perceptron (MLP),
_Recurrent Neural Network (RNN), Convolutional_
_Neural Network (CNN), and Transformers (TRA)_
(as a matter of fact, we also considered some variants of these architectures, e.g., LSTM or GRU, but
the results were very similar). We employed gridsearch techniques to optimize the configurations
for overall performance. The detailed description
of optimization procedures and final specifications
can be found in A.3. We trained each knowledge
base for 3 runs.
Being part of a hybrid model, networks are supposed to provide premises for the prover. Therefore,
beside the standard measure of accuracy (correct
label), we consider another one: an output is correct if it involves all the necessary premises, i.e.,
it is a correct but not necessarily minimal (NNM)
inference.
**4.1** **Overall Performance**
In the first experiment, we checked the overall accuracy of the models for the split described in 3.
The results are shown in Table 3 (more details in
A.4). Clearly, the numbers are high enough to exclude a large generalization gap (see, e.g., Hoffer
et al. (2017)), i.e., a substantial difference in performance on the training and on the test data (see
A.5 for exact values). A large generalization gap
would indicate that the model excessively memorizes (overfits) training data. However, as the next
experiments show, the generalization gap is not a
good measure of compositional and recursive generalization.
We also verified how the models generalize basic non-compositional and non-recursive features
[1https://github.com/manuel-vg/syllogistic-logic](https://github.com/manuel-vg/syllogistic-logic)
2270
-----
Type 1 2 3 4 5 6 7 Valid Invalid All
# Inf. 124 334 519 1026 157 245 622 3027 14133 17160
Table 2: Data distribution of the 5 knowledge bases used for training (mean # of inferences by type and validity)
|Inf.|Best|Mean|SD|
|---|---|---|---|
|Val. Inv. All|93.9 97.1 96.6|83.2 94.2 93.5|13.1 2.5 3.1|
|Val. Inv. All|95.9 98.3 98.0|93.5 97.7 97.4|1.3 0.5 0.4|
|Val. Inv. All|94.3 97.3 96.9|92.0 96.7 96.4|1.3 0.3 0.2|
Model Inf. Best Mean SD NNM
Val. 93.9 83.2 13.1 88.9
MLP Inv. 97.1 94.2 2.5 –
All 96.6 93.5 3.1 –
Val. 95.9 93.5 1.3 95.3
RNN Inv. 98.3 97.7 0.5 –
All 98.0 97.4 0.4 –
Val. 94.3 92.0 1.3 94.4
CNN Inv. 97.3 96.7 0.3 –
All 96.9 96.4 0.2 –
Val. 96.6 93.6 2.9 95.7
TRA Inv. 97.8 96.3 1.3 –
All 97.7 96.1 1.3 –
Table 3: Overall accuracy: best, mean, standard deviation (SD), mean accuracy for not-necessarily-minimal
correct inferences (NNM), for valid (Val.), invalid (Inv.),
and all hypotheses, respectively (see 4.1)
of the syllogistic logic: Principle of Contradiction
(either H or H is invalid), non-empty denotations
of constants (if Aab is valid, then Iab is valid), as
well as the symmetry of formulas Iab and Eab.
The level of generalization is very high (see A.7).
It suggests that the models learned at least elementary aspects of the meaning of involved terms (see
Discussion).
**4.2** **Compositionality**
**Unseen Short Lengths.** We define the length of
inference as the total length of all A-chains, i.e.,
the number of A-formulas among the premises. To
perform the unseen lengths experiments, for the
training data, we removed inferences either with
short or with long lengths, the length depending
on inference type (this is because maximal lengths
_µ(t) represented in the knowledge base depend_
on inference type t). Then we test only on the
eliminated inferences.
In this experiment, we removed inferences of
length 5 and less. Accuracies calculated for every unseen length separately are shown in Figure
3 (the left plot). A sharp and consistent drop in
performance can be observed, depending on how
far the tested length is from the lengths present in
the training data.
We interpret these results as a clear sign of a lack
of compositionality. The models are able to perform well on the longer inferences without being
able to perform on shorter ones, even though the
latter form parts of the former. To take a simple
inference of type 2 as an example, if the model
is able to conclude from Aab, Abc, Acd that Aad
but not that Aac, it means that this inference is not
compositional.
**Removing an inference type.** For this experiment, we proceeded to split the training/test dataset
in a way similar to that described in 3, the only difference being that an entire type of inference is removed from the training dataset. We then checked
the performance by testing on each type separately.
Table 4 presents the results (mean accuracy) of tests
on the removed type, which are most relevant from
our perspective.
The first observation is that the categorization of
the data based on inference types is not spurious.
The models are essentially incapable of finding inferences of types that are not present in the training
data. On the other hand, these results confirm our
conclusion from the experiment on short unseen
lengths: the models do not use compositional inferences.
Compositional inferences presuppose recognizing the inferential structures of its parts. It has been
noted in 2.2 that an inference of every type can
be decomposed into two inferences, one of which
is of type 2. There are other possible decompositions. For example, an inference of type 5 requires
knowledge that Iea and Aa − _b imply Ieb, i.e.,_
it can be decomposed into two inferences, one of
which is of type 7. There are similar relationships
between type 5 and type 6, or type 3 and types 6
and 7. Therefore removing a type from the training
data would not completely annihilate performance
on this type for a model that processes inferences
in a compositional manner.
The only exception is type 3, on which all the
architectures exhibit non-zero performance after
removing it from the training data. However, this
can be explained by the models’ grasping the nonempty denotations of constants (i.e., that Aab implies Iab). With the aid of this generalization, type
3 can be derived from type 5. Indeed, after remov
2271
-----
Model 1 2 3 4 5 6 7
MLP 0.0 0.0 7.1 0.1 0.0 0.0 0.0
RNN 0.0 0.0 18.5 0.6 0.0 0.0 0.0
CNN 0.0 0.0 7.0 0.3 0.0 0.0 0.0
TRA 0.0 0.0 13.2 2.5 0.1 0.0 0.0
Table 4: Mean accuracy for testing on a type that was removed from the training data
Figure 3: Performance on unseen lengths for short inferences (left) and long inferences (right). The models
are trained on inferences of length more than 5 (left) or
less than µ(t) − 4 (right), where µ(t) is the maximal
length for type t. Then they are tested on the lengths
removed from training. The plots show accuracies for
each removed length separately.
Figure 4: Unseen lengths for short inferences (left) and
long inferences (right) without type 2 for test.
ing additional type 5, the performance of type 3
drops to zero.
**4.3** **Recursiveness**
**Unseen Long Lengths.** This experiment is similar to the experiment on unseen short lengths but
with the longest inferences removed from the training data. The results for inferences of length more
than µ(t) − 5 removed (i.e., the 5 longest lengths
for each type), and accuracies calculated for every unseen length separately are shown in Figure 3.
Again, we can see a very clear drop in performance,
depending on the distance of the length from the
lengths seen in training. It means that the models are not able to perform inferences much longer
than those used for training. As a matter of fact,
Figure 5: Performance on unseen lengths for short inferences (left) and long inferences (right), except for type
2. The experiments are as in Figure 4, but with all the
lengths for type 2 inferences used in training (see 4.3
for details).
for inferences longer only by 1, the accuracy is still
high.
Clearly, every inference type has a recursive
structure: longer inferences can be constructed
from shorter ones by extending the involved Achains. This kind of recursiveness, consisting of
the iterative application of a rule, is termed in Hupkes et al. (2019) as productivity. Thus, we can
interpret the results of this experiment as a sign
of a lack of productivity. On the other hand, the
results, when only inferences of maximal length
are removed, indicate that some local extrapolation
takes place.
**Unseen Lengths except for type 2.** In these two
experiments, we removed from the training data
either the shortest or the longest inferences, except
for inferences of type 2. These are selected without
any restrictions on the length (but not included in
the test data). The results are presented in Figure 5.
The performance drops, but the change is smaller
as compared to the experiments on unseen lengths
described above and in 4.2 (see Figure 4 for comparison). This is particularly evident for TRAs,
e.g., for short unseen lengths 1,2,3, the difference
is 16.6, 18.4 and 15.2, respectively. For long unseen lengths, the corresponding values are 5, 14.4
and 18.7. Interestingly, RNNs and CNNs do not
seem to considerably benefit from extra training
2272
-----
|Best|Mean|
|---|---|
|0.0|0.0|
|0.0|0.0|
|0.0|0.0|
Model Best Mean SD
MLP 0.0 0.0 0.0
RNN 0.0 0.0 0.0
CNN 0.0 0.0 0.0
TRA 0.0 0.0 0.0
Table 5: Unseen combinations of premises
data. A more detailed discussion of the general
performance of the architectures will be carried out
in a separate section.
We interpret these results as a sign of some capabilities of the models to combine inferences, i.e.,
as evidence for some level of recursiveness. As
it was pointed out in the introduction, compositionality and recursiveness are distinct categories
of language and language processing, so our findings from this and the previous section, indicating
a lack of compositionality and some presence of
recursiveness, are not contradictory.
**Unseen combinations of premises.** In this experiment, we select a set ∆ of formulas forming
an A-chain from the knowledge base, remove from
the training data inferences F ⊢ _F such that_
_|F ∩_ ∆| > 1,
and test on the removed inferences. In other words,
during training, the models do not see inferences
that combine two or more premises from ∆. This
aspect of generalization is termed systematicity in
Hupkes et al. (2019).
For n ∈{2, 4, 6, 8}, we randomly selected an Achain ∆ of length n, and performed the experiment.
The results presented in Table 5 (mean for all values
of n) are rather extreme: all architectures exhibited
zero accuracy. Apparently, in order for the models
to be able to employ a combination of premises in
an inference, the premises need to be seen together
in some inference during training. It is true even of
the simplest inferences like {Aab, Abc} ⊢ _Aac._
**4.4** **Testing on a new knowledge base**
In the last experiments, we went beyond the general
framework of the study. We substantially increased
the distance between the training and the test data,
employing a new knowledge base for testing. We
selected 3 bases with no overlapping labels for a
given hypothesis and repeated the experiment 6
times for every combination of the base used for
training and for testing.
|Inf.|Best|Mean|
|---|---|---|
|Val. Inv. All|0.1 67.5 55.2|0.0 63.3 51.8|
|Val. Inv. All|0.0 30.1 24.5|0.0 11.2 9.2|
|Val. Inv. All|0.0 100.0 81.7|0.0 95.3 78.0|
Model Inf. Best Mean SD
Val. 0.1 0.0 0.0
MLP Inv. 67.5 63.3 2.3
All 55.2 51.8 1.9
Val. 0.0 0.0 0.0
RNN Inv. 30.1 11.2 6.6
All 24.5 9.2 5.4
Val. 0.0 0.0 0.0
CNN Inv. 100.0 95.3 6.6
All 81.7 78.0 5.4
Val. 0.0 0.0 0.0
TRA Inv. 83.1 81.9 0.7
All 67.9 67.1 0.6
Table 6: Overall accuracy for tests on new knowledge
bases
The results in Table 6 show that in this setting,
the models generalize poorly. In particular, the
accuracy on valid hypotheses is always zero. A
more detailed analysis of the results reveals (see
A.6) that some architectures learn to ignore the
knowledge base part of the input and, regardless of
the test data, produce labels that correspond to the
base used for training. This is true of TRAs, and,
to a lesser extent, of CNNs and MLPs. However,
RNNs do not memorize in this way.
On the other hand, CNNs exhibit almost perfect
performance (mean: 95.3%) on invalid hypotheses,
and this behavior cannot be explained by memorization: 16% (i.e., around 2300) of the hypotheses
that are invalid in the new knowledge base are valid
in the old one (see Table 7). The task of deciding
if a hypothesis H is invalid for a knowledge base
_KB amounts to deciding if the set KB ∪{H} is_
consistent. Thus, CNNs learned to recognize consistency of sets of syllogistic formulas far beyond
the training setup. All other architectures obtained
zero accuracy on this task.
Finally, we tested the generalization of basic features of the syllogistic logic as in 4.1. The results
show almost perfect performance for CNNs and
TRAs (see A.7 for details). Interestingly, RNNs exhibit a low level of generalization of the Principle
of Contradiction.
**4.5** **Comparison of architectures**
TRAs do not substantially outperform other architectures. This stands in contrast to presuppositions
(see, e.g., Smolensky et al. (2022)) that it is transformers’ ability to process data in a compositional
manner that explains their successes in real-world
2273
-----
applications. They do perform better on tests related to recursiveness but are below average on our
compositionality tests. More importantly, we never
see qualitative superiority, e.g., tasks on which only
TRAs attain non-zero performance.
RNNs struggle when tested on a new knowledge
base. It is the only architecture that does not generalize the Principle of Contradiction. Moreover,
RNNs’ limited memorization indicates that they
process data differently. CNNs’ almost perfect performance on invalid hypotheses hints that they may
have some interesting distinctive features deserving
of further studies.
MLPs, unsurprisingly, lag behind, but they are
not so much worse. Thus, if understood as a benchmark architecture, their performance indicates that
in terms of capabilities for compositional and recursive aspects of language processing, all the known
deep-learning designs are basically on par – at least
when employing standard training regimes.
**5** **Discussion**
The paper’s main contributions are two-fold:
methodological and experimental.
Studies of logical reasoning in neural networks
usually consider much simpler toy logic examples,
often not even fully recursive, than the experimental setup offered in this paper, cf. Bowman et al.
(2015). On the other hand, articles focusing on various aspects of generalizations, like compositionality, systematicity, or recursiveness, often adopt
empirical frameworks less straightforwardly linked
to reasoning and semantics, cf. Hupkes et al. (2019)
or Lake and Baroni (2023). Moreover, many papers do not distinguish between recursiveness and
compositionality. For example, in Lake and Baroni (2023), a sequence-to-sequence model’s performance on unseen combinations of functions is
tested (see Fig. 2 in the paper); however, it is not
verified whether the model can correctly process
corresponding sub-combinations, which is a necessary condition for compositionality. Similarly,
in Clark et al. (2021), the authors investigate the
generalization of certain rule-based reasonings to
patterns longer than those seen in training (see Table 1). But they do not take into consideration their
internal structure, either.
The current paper proposes solving these problems by a systematic study of reasoning in a natural language fragment Pratt-Hartmann (2004). Our
experiments show that even though the neural net
work models can grasp some elementary aspects of
syllogistic reasoning, they cannot learn the logic’s
fully recursive and compositional nature. They
manifest various aspects of the meaning of involved terms, e.g., the Principle of Contradiction,
non-emptiness of denotations, or the symmetry of
quantifiers. They also exhibit some ability to combine inferences into more complex ones, which
agrees with findings, e.g., from Lake and Baroni
(2023). At the same time, they do not assimilate
the recursive structure of inferences, so high performance on shorter inferences of a given type does
not translate to high performance on longer ones
(see Schlegel et al. (2022) for similar results for
the task of recognizing consistency of a set of formulas). Moreover, the networks do not pass the
compositionality test: they appear to apprehend
complex inferences without apprehending the constituent subinferences. From the semantical perspective, this shows that the models do not understand the meanings of syllogistic formulas because,
ultimately, it is their meanings that determine the
structure of syllogistic inferences.
**5.1** **Acknowledgments**
The research was funded by the National Science
Center, Poland [grant 2020/37/B/HS1/04220]. We
gratefully acknowledge Polish high-performance
computing infrastructure PLGrid (HPC Center:
ACK Cyfronet AGH) for providing computer facilities and support within computational grant
no. PLG/2023/016283. We are also grateful
to Raffaella Bernardi, Jaap Jumelet, Paolo Rota,
Shane Steinert-Threlkeld, Stefano Teso, Kentaro
Yamamoto, the audience of the CLIC seminar at
UniTn, and the workshop on reasoning and neural models at UvA for comments and interesting
discussions.
**5.2** **Limitations**
The syllogistic form is only a small fragment of
the natural language, so our findings are not conclusive with regard to aspects of logical reasoning
that are not present in syllogistic logic. Moreover,
the choice of encodings and the synthetic data constructed for the sake of experiments conducted in
the study further increase the distance of our set-up
from natural language reasoning.
Another limitation is related to the training
regimes employed. Other methods of training neural networks may allow for a higher level of compositional and recursive generalization.
2274
-----
|Inf.|%|Best|Mean|
|---|---|---|---|
|Val. Inv.|16 84|0.8 80.0|0.4 75.4|
|Val. Inv.|16 84|0.2 35.7|0.0 13.4|
|Val. Inv.|16 84|100.0 100.0|88.7 96.6|
Model Inf. % Best Mean SD
Val. 16 0.8 0.4 0.2
MLP
Inv. 84 80.0 75.4 2.6
Val. 16 0.2 0.0 0.0
RNN
Inv. 84 35.7 13.4 7.8
Val. 16 100.0 88.7 11.5
CNN
Inv. 84 100.0 96.6 6.1
Val. 16 1.6 0.8 0.3
TRA
Inv. 84 98.4 97.5 0.6
Table 7: Split accuracy for tests on invalid hypotheses
in a new knowledge base (KB): 16% (appr. 2500) of
hypotheses that are invalid in the new KB were valid in
the KB used for training.
**References**
Samuel R. Bowman, Christopher Potts, and Christo[pher D. Manning. 2015. Recursive neural networks](https://doi.org/10.18653/v1/W15-4002)
[can learn logical semantics. In Proceedings of the](https://doi.org/10.18653/v1/W15-4002)
_3rd Workshop on Continuous Vector Space Models_
_and their Compositionality, pages 12–21, Beijing,_
China. Association for Computational Linguistics.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2021.
Transformers as soft reasoners over language. IJ_CAI’20: Proceedings of the Twenty-Ninth Interna-_
_tional Joint Conference on Artificial Intelligence,_
623(537):3882–3890.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2018. BERT: pre-training of](http://arxiv.org/abs/1810.04805)
[deep bidirectional transformers for language under-](http://arxiv.org/abs/1810.04805)
[standing. CoRR, abs/1810.04805.](http://arxiv.org/abs/1810.04805)
Elad Hoffer, Itay Hubara, and Daniel Soudry. 2017.
[Train longer, generalize better: closing the general-](https://proceedings.neurips.cc/paper_files/paper/2017/file/a5e0ff62be0b08456fc7f1e88812af3d-Paper.pdf)
[ization gap in large batch training of neural networks.](https://proceedings.neurips.cc/paper_files/paper/2017/file/a5e0ff62be0b08456fc7f1e88812af3d-Paper.pdf)
In Advances in Neural Information Processing Sys_tems, volume 30. Curran Associates, Inc._
Dieuwke Hupkes, Verna Dankers, Mathijs Mul, and
[Elia Bruni. 2019. The compositionality of neural](http://arxiv.org/abs/1908.08351)
[networks: integrating symbolism and connectionism.](http://arxiv.org/abs/1908.08351)
_CoRR, abs/1908.08351._
Dieuwke Hupkes, Mario Giulianelli, Verna Dankers,
Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra,
Arabella Sinclair, et al. 2023. A taxonomy and review
of generalization research in nlp. Nature Machine
_Intelligence, 5(10):1161–1174._
Theo MV Janssen and Barbara H Partee. 1997. Compositionality. In Handbook of logic and language,
pages 417–473. Elsevier.
Brenden M. Lake and Marco Baroni. 2023. Human-like
systematic generalization through a meta-learning
neural network. Nature, 623(7985):115–121.
Jan Łukasiewicz. 1951. Aristotle’s Syllogistic From
_the Standpoint of Modern Formal Logic. Oxford,_
England: Garland.
Santiago Ontanon, Joshua Ainslie, Vaclav Cvicek, and
[Zachary Fisher. 2022. Logicinference: A new dataset](http://arxiv.org/abs/2203.15099)
[for teaching logical inference to seq2seq models.](http://arxiv.org/abs/2203.15099)
Ian Pratt-Hartmann. 2004. [Fragments of language.](https://doi.org/10.1023/b:jlli.0000024735.97006.5a)
_Journal of Logic,_ _Language and Information,_
13(2):207–223.
Viktor Schlegel, Kamen Pavlov, and Ian Pratt-Hartmann.
[2022. Can transformers reason in fragments of natu-](https://doi.org/10.18653/v1/2022.emnlp-main.768)
[ral language? In Proceedings of the 2022 Conference](https://doi.org/10.18653/v1/2022.emnlp-main.768)
_on Empirical Methods in Natural Language Process-_
_ing, pages 11184–11199, Abu Dhabi, United Arab_
Emirates. Association for Computational Linguistics.
[Timothy J. Smiley. 1973. What is a syllogism? Journal](https://doi.org/10.1007/bf02115614)
_of Philosophical Logic, 2(1):136–154._
Paul Smolensky, Richard Thomas McCoy, Roland Fernandez, Matthew Goldrick, and Jianfeng Gao. 2022.
[Neurocompositional computing: From the central](https://doi.org/https://doi.org/10.1002/aaai.12065)
[paradox of cognition to a new generation of ai sys-](https://doi.org/https://doi.org/10.1002/aaai.12065)
[tems. AI Magazine, 43(3):308–322.](https://doi.org/https://doi.org/10.1002/aaai.12065)
**A** **Appendix**
**A.1** **Construction of inferences**
We derived all possible syllogisms from Theorems
1 and 2 as follows: for each antilogism of the form
_F ∪_ _F_, we consider all possible values that F can
have to construct a valid syllogism of the form
_F ⊢_ _F_ . Table 18 summarizes this process for every form of antilogism described in Theorem 1.
Note that from the third form, i.e., {Aa − _b, Ac −_
_d, Iac, Ebd} and {Aa −_ _b, Ac −_ _d, Ica, Ebd}, we_
only describe the former, since the latter is equivalent but with swapping variables. After renaming
variables and removing equivalent syllogisms, the
list from Table 18 boils down to 7 types of valid
inferences presented in Table 1.
**A.2** **Types of encodings**
We experimented with one-hot and word embeddings to encode syllogistic formulas. We picked the
former because it achieves higher accuracy within
our framework. To see a comparison between onehot encoding and word embeddings, we trained a
single knowledge base using both techniques, we
then tested the overall accuracy for valid hypotheses. The results for each architecture are shown
in Figure 8. There is a significant difference in
MLPs and TRAs. RNNs did a much better work
and CNNs seem to be able to handle both types of
2275
-----
|Enc.|Best|Mean|
|---|---|---|
|1-hot emb.|82.8 65.5|82.4 54.9|
|1-hot emb.|86.2 74.9|85.6 74.4|
|1-hot emb.|86.0 84.4|85.4 83.4|
Model Enc. Best Mean SD
1-hot 82.8 82.4 0.3
MLP
emb. 65.5 54.9 9.5
1-hot 86.2 85.6 0.6
RNN
emb. 74.9 74.4 0.4
1-hot 86.0 85.4 0.8
CNN
emb. 84.4 83.4 0.7
1-hot 93.7 91.9 2.1
TRA
emb. 65.6 61.0 5.5
Table 8: Comparison between 1-hot encodings and word
embeddings using a single knowledge base and the same
configuration for each architecture (accuracy for valid
hypotheses)
encoding quite well. For word embeddings, increasing the number of heads in the TRA architecture
also increases the accuracy. But still, they cannot
outperform one-hot representations.
**A.3** **Neural models specification**
We built our models using the TensorFlow library
and Python as a programming language. The gradient descent method we used is the Adam optimization algorithm (for MLP, CNN, and TRA) and its
variant Adamax (for RNN) with a learning rate of
0.001. The number of epochs performed is 350
for transformers and 250 for the rest, and the batch
size for all architectures is 20. The configuration of
layers used for each model is detailed in Table 9.
We performed our experiments using a GPU_A100. The time for training a model varies for_
each architecture and each experiment. A single
run, on average, for MLPs, CNNs, and TRAs takes
between 10 and 20 minutes, whereas for RNNs, it
takes around 60 minutes.
The number of neurons, layers, and other essential hyperparameters were optimized using gridsearching techniques. Our aim was achieve an optimal performance for the overall accuracy test, in
particular for valid inferences. We obtained above
90% of correct predictions for valid and invalid
inferences using mostly default parameters and
keeping the models with simple and general specifications as much as possible. Nevertheless, we
experimented with increasing the number of layers
and units or tweaking other parameters such as the
learning rate, however without seeing any significant improvements. In particular, adding more
layers to RNNs led to the vanishing gradient problem. For CNNs, we also tried different configurations regarding the number of filters, and the sizes
of kernels and poolings. Finally, for transformers, we set up an encoder-only model by mainly
changing the number of attention layers and attention heads. We chose this type of model since our
approach can be seen as a text classification task.
However, for completeness, we also experimented
with encoder-decoder and decoder-only transformers with unsuccessful results.
We also experimented with LSTM and GRU recurrent models. However, the performance was not
superior to RNNs, so we decided to stick with the
latter. Last but not least, we tried fine-tuning techniques and trained our data on pre-trained models
Devlin et al. (2018) but with no success. This type
of encoding could not take apart the hypothesis
from the knowledge base and the dense vectors the
model produced were extremely similar to each
other. As a result, there was no learning at all. We
solved this problem by encoding the knowledge
base and the hypothesis independently, but even
then, the models were not able to outperform the
other architectures.
**A.4** **Overall accuracy by types of inference**
We present the detailed results from the experiment
described in 4.1. Tables 10, 11, 12, and 13 show the
overall performance results by types of inference
for MLPs, RNNs, CNNs, and TRAs, respectively.
The NNM column is the mean percentage of the
model’s output when taking into account all correct predictions, i.e., correct inferences that are not
necessarily minimal. Moreover, in the last column,
we present the average Hamming distance (HD)
between the correct NNM predictions and the labels (the correct answer), i.e., the average number
of premises that are not needed. Note that for all
architectures, this value is smaller than 2, which
means that models (on average) do not select too
many unneeded premises whenever they get the
needed ones.
**A.5** **Generalization gap**
We test on the training data for all architectures to
check the generalization gap, i.e., the difference in
performance on training versus test data. It can be
seen from Table 14 that in this sense the models
generalize very well (compare it with Table 3).
**A.6** **Permutation test**
For this test, we train a model on a knowledge
base 1, and test it on a new knowledge base
_KB_
2. However, we count an output as correct if
_KB_
2276
-----
|Model|Layers|
|---|---|
|MLP|1 Dense layer with 2500 units and tanh activation|
|RNN|2 SimpleRNN layers with 250 units and tanh activation|
|CNN|1 Conv1D layer with 512 filters, a kernel of size 5, and relu activation 1 MaxPooling1D layer with a pool size of 3|
|TRA|1 Embedding layer (to learn the positions of constants and quantifiers) 1 Encoder self-attention layer: 1 MultiHeadAttention layer with 2 heads 1 Feed-forward network (3 hidden Dense layers with 32 units and relu activation) 1 Dense layer with 250 units and tanh activation|
Model Layers
MLP 1 Dense layer with 2500 units and tanh activation
RNN 2 SimpleRNN layers with 250 units and tanh activation
1 Conv1D layer with 512 filters, a kernel of size 5, and relu activation
CNN
1 MaxPooling1D layer with a pool size of 3
1 Embedding layer (to learn the positions of constants and quantifiers)
1 Encoder self-attention layer:
TRA 1 MultiHeadAttention layer with 2 heads
1 Feed-forward network (3 hidden Dense layers with 32 units and relu activation)
1 Dense layer with 250 units and tanh activation
Table 9: Layers used in all architectures
|Best|Mean|SD|NNM|
|---|---|---|---|
|80.6 92.0 99.2 93.0 100.0 100.0 99.4|46.8 80.9 89.6 78.9 95.0 89.5 88.1|17.9 13.2 8.1 15.9 8.3 11.2 15.4|55.5 83.3 92.9 88.7 96.8 90.0 93.4|
|93.9 97.1|83.2 94.2|13.1 2.5|88.9 –|
Inf. Best Mean SD NNM HD
1 80.6 46.8 17.9 55.5 1.2
2 92.0 80.9 13.2 83.3 1.1
3 99.2 89.6 8.1 92.9 1.0
4 93.0 78.9 15.9 88.7 1.1
5 100.0 95.0 8.3 96.8 1.0
6 100.0 89.5 11.2 90.0 1.0
7 99.4 88.1 15.4 93.4 1.0
Val. 93.9 83.2 13.1 88.9 1.1
Inv. 97.1 94.2 2.5 – –
All 96.6 93.5 3.1 – –
Table 10: Overall accuracy for MLP
|Best|Mean|SD|NNM|
|---|---|---|---|
|77.8 96.5 99.3 96.1 100.0 100.0 99.4|58.6 92.1 96.6 92.0 99.5 97.7 97.9|9.7 3.6 1.8 2.2 1.2 2.0 1.1|60.7 92.8 96.9 96.1 99.5 97.7 98.9|
|95.9 98.3|93.5 97.7|1.3 0.5|95.3 –|
All 98.0 97.4 0.4 – –
Table 11: Overall accuracy for RNN
|Best|Mean|SD|NNM|
|---|---|---|---|
|78.4 96.6 96.4 93.0 100.0 100.0 100.0|58.3 92.5 92.3 90.6 97.9 93.3 98.3|11.3 2.5 3.4 1.4 2.9 5.8 1.3|64.3 93.1 92.5 96.2 98.4 93.3 99.0|
|94.3 97.3|92.0 96.7|1.3 0.3|94.4 –|
Inf. Best Mean SD NNM HD
1 78.4 58.3 11.3 64.3 1.7
2 96.6 92.5 2.5 93.1 1.2
3 96.4 92.3 3.4 92.5 1.0
4 93.0 90.6 1.4 96.2 1.1
5 100.0 97.9 2.9 98.4 1.0
6 100.0 93.3 5.8 93.3 0.0
7 100.0 98.3 1.3 99.0 1.0
Val. 94.3 92.0 1.3 94.4 1.2
Inv. 97.3 96.7 0.3 – –
All 96.9 96.4 0.2 – –
Table 12: Overall accuracy for CNN
|Best|Mean|SD|NNM|
|---|---|---|---|
|74.1 96.5 96.4 99.6 100.0 100.0 98.8|62.8 90.8 92.6 96.1 94.3 98.6 96.0|8.4 3.6 2.8 3.7 5.2 2.3 3.1|72.1 92.9 94.8 97.9 97.0 99.3 97.5|
|96.6 97.8|93.6 96.3|2.9 1.3|95.7 –|
All 97.7 96.1 1.3 – –
Table 13: Overall accuracy for TRA
2277
-----
|Inf.|Best|Mean|
|---|---|---|
|Val. Inv. All|98.9 99.6 99.3|90.5 98.4 95.2|
|Val. Inv. All|99.5 99.9 99.6|99.0 99.7 99.4|
|Val. Inv. All|99.4 99.8 99.6|98.8 99.7 99.3|
Model Inf. Best Mean SD
Val. 98.9 90.5 10.1
MLP Inv. 99.6 98.4 1.3
All 99.3 95.2 4.6
Val. 99.5 99.0 0.3
RNN Inv. 99.9 99.7 0.1
All 99.6 99.4 0.1
Val. 99.4 98.8 0.3
CNN Inv. 99.8 99.7 0.1
All 99.6 99.3 0.1
Val. 99.4 97.8 2.1
TRA Inv. 99.9 99.3 0.8
All 99.6 98.7 1.2
Table 14: Test on the same data used for training
|Inf.|Best|Mean|
|---|---|---|
|Val. Inv. All|66.2 77.4 75.1|61.0 72.7 70.6|
|Val. Inv. All|14.3 30.6 27.6|4.3 11.4 10.1|
|Val. Inv. All|73.2 84.3 82.2|68.4 81.0 78.7|
Model Inf. Best Mean SD
Val. 66.2 61.0 3.7
MLP Inv. 77.4 72.7 2.7
All 75.1 70.6 2.9
Val. 14.3 4.3 3.1
RNN Inv. 30.6 11.4 6.7
All 27.6 10.1 6.0
Val. 73.2 68.4 8.2
CNN Inv. 84.3 81.0 5.0
All 82.2 78.7 5.6
Val. 98.3 97.4 0.6
TRA Inv. 98.0 97.4 0.5
All 98.0 97.4 0.4
Table 15: Permutation test on new knowledge bases
it is correct for 1. High performance on this
_KB_
test indicates that the model memorized the training base 1, and ignores the part of the input
_KB_
corresponding to 2.
_KB_
We selected 3 knowledge bases and performed 6
tests, i.e., trained models were tested on the other
2 knowledge bases. Table 15 shows the accuracies calculated in the way described above. TRAs
memorize the training base almost perfectly, while
RNNs do not memorize in this way.
**A.7** **Principle of Contradiction, non-emptiness**
**of denotations, symmetry**
For these tests, we search the output for the
following pairs of hypotheses: (1) {H, H};
(2) {Aab, Iab}; (3) {Iab, Iba} and {Eab, Eba}.
Then we calculate the percentage of pairs that
confirm (1) Principle of Contradiction (2) Nonemptyness of denotations (i.e., Aab implies Iab),
and (3) symmetry of formulas Iab, Eab. The re
|Pair|Highest|Mean|
|---|---|---|
|(1) (2) (3)|94.8 100.0 93.6|92.9 100.0 92.3|
|(1) (2) (3)|75.7 100.0 98.3|59.8 100.0 94.2|
|(1) (2) (3)|100.0 100.0 100.0|99.3 99.7 98.5|
Model Pair Highest Mean SD
(1) 94.8 92.9 1.1
MLP (2) 100.0 100.0 0.1
(3) 93.6 92.3 0.7
(1) 75.7 59.8 5.6
RNN (2) 100.0 100.0 0.0
(3) 98.3 94.2 1.9
(1) 100.0 99.3 1.5
CNN (2) 100.0 99.7 0.4
(3) 100.0 98.5 1.4
(1) 99.9 99.7 0.2
TRA (2) 100.0 99.8 0.2
(3) 99.8 99.6 0.2
Table 16: Pairs on new KBs. (1) valid/invalid {H, H},
(2) valid/valid {Aab, Iab}, (3) valid/valid {Iab, Iba},
_{Eab, Eba}_
|Pair|Highest|Mean|
|---|---|---|
|(1) (2) (3)|99.8 100.0 98.5|99.4 99.9 97.6|
|(1) (2) (3)|99.9 100.0 99.7|99.9 100.0 99.2|
|(1) (2) (3)|99.7 100.0 99.1|99.7 100.0 98.9|
Model Pair Highest Mean SD
(1) 99.8 99.4 0.3
MLP (2) 100.0 99.9 0.1
(3) 98.5 97.6 0.8
(1) 99.9 99.9 0.0
RNN (2) 100.0 100.0 0.0
(3) 99.7 99.2 0.2
(1) 99.7 99.7 0.1
CNN (2) 100.0 100.0 0.0
(3) 99.1 98.9 0.2
(1) 100.0 99.8 0.2
TRA (2) 100.0 99.7 0.1
(3) 99.8 99.4 0.5
Table 17: Pairs on test data. (1) valid/invalid {H, H},
(2) valid/valid {Aab, Iab}, (3) valid/valid {Iab, Iba},
_{Eab, Eba}_
sults are shown in Table 16 for tests on a new knowledge base, and Table 17 for tests on the same knowledge base (i.e., its own test dataset). Apparently,
RNNs poorly generalize Principle of Contradiction
on a new knowledge base.
2278
-----
|F F ∪{ }|F|F F ⊢|
|---|---|---|
|Aa b, Oab { − }|Aab Aax 1 Ax x (1) i i+1 Ax b m Oab|Oab Oab { } ⊢ Ax b, Oab Oax { 1 − } ⊢ 1 Aa x, Ax b, Oab Ox x { − i i+1 − } ⊢ i i+1 Aa x, Oab Ox b { − m } ⊢ m Aa b Aab { − } ⊢|
|Aa b, Aa c, Ebc { − − }|Aab Aac Aax 1 Aay 1 Ax x (1) i i+1 Ay y (2) i i+1 Ax b m Ay c n Ebc|Aa c, Ebc Oab { − } ⊢ Aa b, Ebc Oac { − } ⊢ Ax b, Aa c, Ebc Oax { 1 − − } ⊢ 1 Aa b, Ay c, Ebc Oay { − 1 − } ⊢ 1 Aa x, Ax b, Aa c, Ebc Ox x { − i i+1 − − } ⊢ i i+1 Aa b, Aa y, Ay c, Ebc Oy y { − − i i+1 − } ⊢ i i+1 Aa x, Aa c, Ebc Ox b { − m − } ⊢ m Aa b, Aa y, Ebc Oy c { − − n } ⊢ n Aa b, Aa c Ibc { − − } ⊢|
|Aa b, Ac d, Iac, Ebd { − − }|Aab Acd Aax 1 Acy 1 Ax x (1) i i+1 Ay y (2) i i+1 Ax b m Ay d n Iac Ebd|Ac d, Iac, Ebd Oab { − } ⊢ Aa b, Iac, Ebd Ocd { − } ⊢ Ax b, Ac d, Iac, Ebd Oax { 1 − − } ⊢ 1 Aa b, Ay d, Iac, Ebd Ocy { − 1 − } ⊢ 1 Aa x, Ax b, Ac d, Iac, Ebd Ox x { − i i+1 − − } ⊢ i i+1 Aa b, Ac y, Ay d, Iac, Ebd Oy y { − − i i+1 − } ⊢ i i+1 Aa x, Ac d, Iac, Ebd Ox b { − m − } ⊢ m Aa b, Ac y, Iac, Ebd Oy d { − − n } ⊢ n Aa b, Ac d, Ebd Eac { − − } ⊢ Aa b, Ac d, Iac Ibd { − − } ⊢|
Table 18: Construction of inferences
2279
(1)∀i.1 ≤ _i < m_ (2)∀i.1 ≤ _i < n_
-----
| [
"Manuel, Vargas Guzmán",
"Jakub, Szymanik",
"Maciej, Malicki",
"Kevin, Duh",
"Helena, Gomez",
"Steven, Bethard"
] | 2024-06-01T00:00:00 | NAACL 2024 Findings | false | 1 | 0 | null | https://aclanthology.org/2024.findings-naacl.147 | null | https://www.semanticscholar.org/paper/016d9d932a9872bc61c797aeb02fae62cf0b5777 |
Trajectory Volatility for Out-of-Distribution Detection in Mathematical Reasoning | Real-world data deviating from the independent and identically distributed (\textit{i.i.d.}) assumption of in-distribution training data poses security threats to deep networks, thus advancing out-of-distribution (OOD) detection algorithms. Detection methods in generative language models (GLMs) mainly focus on uncertainty estimation and embedding distance measurement, with the latter proven to be most effective in traditional linguistic tasks like summarization and translation. However, another complex generative scenario mathematical reasoning poses significant challenges to embedding-based methods due to its high-density feature of output spaces, but this feature causes larger discrepancies in the embedding shift trajectory between different samples in latent spaces. Hence, we propose a trajectory-based method TV score, which uses trajectory volatility for OOD detection in mathematical reasoning. Experiments show that our method outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios and can be extended to more applications with high-density features in output spaces, such as multiple-choice questions. | A trajectory-based method TV score is proposed, which uses trajectory volatility for OOD detection in mathematical reasoning and outperforms all traditional algorithms on GLMs under mathematical reasoning scenarios and can be extended to more applications with high-density features in output spaces, such as multiple-choice questions. | ## Trajectory Volatility for Out-of-Distribution Detection in Mathematical Reasoning
**Yiming Wang[α]** **Pei Zhang[β,γ]** **Baosong Yang[β]** **Derek F. Wong[γ]**
**Zhuosheng Zhang[α]** **Rui Wang[α]**
_αShanghai Jiao Tong University_ _βAlibaba Inc._
_γNLP2CT Lab, University of Macau_
**Abstract**
Real-world data deviating from the independent and identically distributed (i.i.d.)
assumption of in-distribution training data poses security threats to deep networks,
thus advancing out-of-distribution (OOD) detection algorithms. Detection methods
in generative language models (GLMs) mainly focus on uncertainty estimation and
embedding distance measurement, with the latter proven to be most effective in
traditional linguistic tasks like summarization and translation. However, another
complex generative scenario mathematical reasoning poses significant challenges
to embedding-based methods due to its high-density feature of output spaces, but
this feature causes larger discrepancies in the embedding shift trajectory between
different samples in latent spaces. Hence, we propose a trajectory-based method TV
score, which uses trajectory volatility for OOD detection in mathematical reasoning.
Experiments show that our method outperforms all traditional algorithms on GLMs
under mathematical reasoning scenarios and can be extended to more applications
with high-density features in output spaces, such as multiple-choice questions.
**Input** **Output** **Input** **Output**
**Algebra** **Bible**
**Geometry** **WMT News**
**Number Theory** **TED Talk**
**Precalculus** **Health**
**Algebra** **WMT News**
Output: 1Input: Suppose that 𝑓𝑓 is a function and 𝑓𝑓[−1] is the inverse of 𝑓𝑓. If 𝑓𝑓(1) = 2, 𝑓𝑓(2) = 6, and 𝑓𝑓(3) = 5, then what is 𝑓𝑓[−1](𝑓𝑓[−1](6))? Input: Supporters say the tunnels would benefit the environment and offer Californians a more secure water supply.Output:支持者们表示,这两条隧道将使环境受益并帮助加利福尼亚州确保供水更加安全。
Input: The product of the proper positive integer factors of Number Theory 𝑛𝑛 can be written as 𝑛𝑛𝑎𝑎𝑎𝑎+𝑏𝑏𝑐𝑐, where 𝑥𝑥 is the number of positive divisors 𝑛𝑛 **Health**
has, 𝑐𝑐 is a positive integer, and the greatest common factor of the three integers 𝑎𝑎, 𝑏𝑏, and 𝑐𝑐 is 1. What is 𝑎𝑎+ 𝑏𝑏+ 𝑐𝑐? Input: The White House Coronavirus Task Force was established on January 29.
Output: 1 Output:白宫冠状病毒工作组是成立于2020年1月29日的以应对美国COVID-19疫情的工作组。
(a) Mathematical Reasoning
(b) Text Generation (Translation)
Figure 1: Illustration of the comparisons of input/output space between mathematical reasoning and
text generation scenarios. We select the MATH [7] dataset for mathematical reasoning and the OPUS
[52] for text generation (translation). We use SimCSE [11] to obtain sentence embeddings and project
them via UMAP [38]. Different colors represent different domains, and lighter and darker shades
of the same color represent the input and output spaces, respectively. We also present input/output
examples of two domains in each scenario. Appendix B.1 and B.2 show detailed settings and full
examples. Projections show that input/output space overlapping of different domains under the
reasoning scenario is more obvious, especially in output spaces where high-density collapse happens.
Email: [email protected] [Code: https://github.com/alsace08/OOD-Math-Reasoning](https://github.com/alsace08/OOD-Math-Reasoning)
-----
**1** **Introduction**
The rapid development of generative language models (GLMs) [43, 44, 6, 2, 53] has empowered
them to fit diverse and challenging datasets, showing strong generalization over in-distribution
(ID) test data satisfying the independent and identically distributed (i.i.d.) assumption. However,
unconstrained inputs in real-world settings frequently trigger distributional drifts, called out-ofdistribution (OOD) data. In such scenarios, model performance often deteriorates unexpectedly,
yielding harmful outcomes. Thus, OOD detection [41, 4] is critical in safeguarding model security.
Based on the availability of OOD data, detection methods can be classified into two categories.
When OOD data is available, constructing a binary classifier with ID and OOD classes [9, 24, 19]
is a standard method. However, the OOD data variety in real-world settings makes it impractical
for classifiers to learn all OOD data features, causing scalability challenges. Thus, we focus on
more generalized scenarios where OOD data is unavailable, and methods can be categorized into
three main types: (1) Output-based methods assess confidence by analyzing predicted probabilities
[14, 5, 34, 55, 30]; (2) Ensemble-based methods assess the uncertainty through a collection of
supporting models [10, 22]; (3) Feature-based methods assess the Mahalanobis Distance between
OOD samples and ID data distribution in the feature space [26, 45, 51]. Besides, there are some
fragmented methods, such as gradient methods [27, 17] and auto-encoder reconstruction methods
[8, 18]. However, these fragmented methods encounter optimization and computational complexity
issues, leading to significant performance bottlenecks [58], so they receive relatively less attention.
Most of the above methods were proposed for classification tasks, while the research on deploying
OOD detection methods on GLMs is relatively niche. Existing methods on GLMs focus on
uncertainty estimation [37, 35] and embedding distance measure [46], and [46] has demonstrated
that embedding-based methods are currently the only optimal solution for text generation scenarios
such as summarization and translation. However, embedding-based methods encounter challenges
under mathematical reasoning scenarios, as shown in Figure 1: (1) Input Space: In the text generation
scenarios, input questions from different domains show more obvious clustering features, yet that
under mathematical reasoning are not well distinguished. We will delve deeper into this issue in
Appendix G.2; (2) Output Space: The output space shows high density under mathematical reasoning,
we call this phenomenon “pattern collapse”. This is because the output space of mathematical
reasoning is scalar, resulting in a compressed search space with a higher probability of overlap in two
widely divergent questions. For example, “1 + 3 =” and “∫13 _[x][d][x][ =][” are both “][4][”. Therefore, current]_
methods suffer serious performance bottlenecks in mathematical reasoning scenarios.
Therefore, we transform our perspective from the static embedding representation to the dynamic
embedding shift trajectory in the latent space and make the following findings: (1) Due to the pattern
collapse property in the output space, trajectories corresponding to different samples are more likely
to differentiate due to the hard constraints on the endpoints of trajectories compared to text generation
scenarios. This theoretical perspective will be presented in Sec.2.1; (2) There exists early stabilization
of GLMs for mathematical reasoning. For ID samples, GLMs have completed the main reasoning at a
later stage, but this is not the case for the OOD samples. This empirical perspective will be presented
in Sec.2.2. These indicate that trajectory is a promising tool to distinguish ID and OOD data.
Based on these analyses, we propose the Trajectory Volatility detection algorithm (TV score) for
mathematical reasoning. We conduct experiments on mainstream mathematical reasoning datasets
and GLMs for the basic OOD detection, and also consider the OOD quality estimation scenario,
which imposes more fine-grained requirements on the OOD scores. Results show that our method
outperforms all traditional algorithms on GLMs under the mathematical reasoning scenario.
Overall, our contribution is progressively three-fold:
- We find phenomena of GLMs under mathematical reasoning about embedding low discrepancy in
the input space, pattern collapse in the output space, and early stabilization in the latent space.
- Based on the excavated phenomena, we verify the rationality of using trajectory as OOD detection
from empirical and theoretical perspectives and propose a novel detection algorithm TV Score,
which distinguishes ID and OOD samples by the embedding shift trajectory volatility.
- Our method significantly outperforms traditional algorithms on GLMs under mathematical
reasoning in diverse evaluation scenarios, including offline detection, online discrimination, and
quality estimation, and it can be extended to more applications like multiple-choice questions.
-----
**1.1** **Problem Statement**
Mathematical reasoning relies on generative language models (GLMs). Let X be the input space and
be the output space. P _,_ is the joint data distribution defined over ×, P and P are the
_Y_ _X_ _Y_ _X_ _Y_ _X_ _Y_
marginal distributions for inputs and outputs, respectively. GLMs are trained given input sequence x =
_x1x2...xt ∼_ _PX of length t to autoregressively generate the next token in the corresponding output_
sequence y = y1y2...yn ∼ _PY of length n over the likelihood model pθ_ **_y_** **_x_** = ∏i[n]=1 _[p][θ][(][y][i][∣][y]≺i[,][ x][)][,]_
where each xi and yi are d-dimensional vectors and θ is sampled from the parameter space Θ.
Assume that _P_ _,_ denote a distribution sufficiently different from P _,(_, the goal of OOD detection∣ )
[̃]X _Y_ _X_ _Y_
in GLMs is to find a score function f **_x, y, θ_** for each sample and a threshold ϵ, which may rely on
the features of X, Y, and Θ, to achieve a high discrimination accuracy goal:
( )
maxf P **_x,y_** ∼PX _,Y_ _f_ **_x, y, θ_** < ϵ + P **_x ̃_** _, ̃y_ ∼P[̃]X _,Y_ **_x, ̃y, θ_** > ϵ _,_ (1)
( ) ( )
and a minimum expected risk where for any two different abnormal distributions [ ( ) ] [[][f] [(][ ̃] ) _P[̃]]X[′]_ _,Y_ [and][ ̃]PX[′′] _,Y_ [:]
minf _PX[′]_ _,Y_ **_x, ̃y_** ∼P[̃]X[′′] _,Y_ **_x, ̃y, θ_** > ϵ _[.]_ (2)
This is a robust goal but tough to achieve in practice, so the first goal is all current work’s main goal.»»»»»[P][(][x][,][y][)][∼] [̃] [[][f] [(][x][,][ y][,][ θ][)][ >][ ϵ][]][ −] [P][(][ ̃] ) [[][f] [(][ ̃] ) ]»[»]»»»
**1.2** **Notation and Definition**
For a given sample si, **_yi_** _l ∈_ R[d] 0 ≤ _l ≤_ _L_ denote the hidden embedding of the l-th layer,
where d is the embedding dimension and yi _l =_ _yi1_ _l,_ _yi2_ _l, ...,_ _yid_ _l_ . We set the embedding
coordinate of the input space to [ ] 0, **_y(i_** 0, the output space to () _i.e., the last hidden layer)_ _L,_ **_yi_** _L_,
and the L − 1 hidden layers to _l,_ **_yi_** _l_ _l=]1_ [.] ([ ] [ ] [ ] )[⊤]
( [ ] ) ( [ ] )
**Definition 1 (Component-independent Trajectory Volatility) For sample si, its Component-**
{( [ ] )}[L][−][1]
_independent Trajectory Volatility during all L hidden layers is defined as the embedding difference_
_between all neighboring layers:_
_L−1_
1
**_V_** _si_ = _yi1_ _l+1 −_ _yi1_ _l_ _,_ _yi2_ _l+1 −_ _yi2_ _l_ _, ...,_ _yid_ _l+1 −_ _yid_ _l_ _._ (3)
_L −_ 1 ∑
_l=1_
( ) (∣[ ] [ ] ∣ ∣[ ] [ ] ∣ ∣[ ] [ ] ∣)[⊤]
Higher values of the d-th component of V _si_ indicate increased instability of the d-th dimension in
the inference process for the sample si.
( )
**2** **Motivation: Why Trajectory As The Measure?**
**2.1** **Likelihood-High Trajectory Variation: A Macro Theoretical Perspective**
Figure 2: The “pattern collapse” phenomenon only exists in mathematical reasoning scenarios, where
**Mathematical Reasoning** **Text Generation (e.g. Translation)**
𝜙𝜙𝑙𝑙 ⋅: Embedding in 𝑙𝑙-th layer _Output Space_ _Output Space_
𝑠𝑠𝑖𝑖, 𝑠𝑠𝑗𝑗[: samples in the dataset]
𝜙𝜙𝐿𝐿/2 𝑠𝑠𝑗𝑗 𝜙𝜙𝐿𝐿 𝑠𝑠𝑖𝑖 − 𝜙𝜙𝐿𝐿 𝑠𝑠𝑗𝑗 2 [≈0] 𝜙𝜙𝐿𝐿/2 𝑠𝑠𝑗𝑗
_Input Space_ _Input Space_
𝜙𝜙𝐿𝐿 𝑠𝑠𝑖𝑖 − 𝜙𝜙𝐿𝐿 𝑠𝑠𝑗𝑗 2 [≫0]
𝜙𝜙0 𝑠𝑠𝑖𝑖 − 𝜙𝜙0 𝑠𝑠𝑗𝑗 2 [≫0] 𝜙𝜙𝐿𝐿/2 𝑠𝑠𝑖𝑖 𝜙𝜙0 𝑠𝑠𝑖𝑖 − 𝜙𝜙0 𝑠𝑠𝑗𝑗 2 [≫0] 𝜙𝜙𝐿𝐿/2 𝑠𝑠𝑖𝑖
two samples initially distant in distance will converge approximately at the endpoint after undergoing
embedding shifts, and does not occur in text generation scenarios. This produces a greater likelihood
of trajectory variation under different samples in mathematical reasoning.
-----
Figure 1 has illustrated the phenomenon of pattern collapse within the output space in mathematical
reasoning scenarios, and the scalar nature of the output space makes the phenomenon intuitive.
However, this special phenomenon makes detecting OOD with trajectories in mathematical reasoning
scenarios more promising. We abstract this phenomenon in Figure 2, which compares the trajectory
trend between different samples in the mathematical reasoning and text generation scenarios.
In mathematical reasoning, when two initial points are separated by any distance in the input space,
they typically converge to a significantly closer distance in the output space after undergoing an
embedding shift. However, in text generation, outputs from different samples may not exhibit this
same convergence. Constraints on trajectory endpoints in mathematical reasoning allow for a
**greater likelihood of variation in trajectory volatility under different samples. Based on this**
intuition, we propose a key hypothesis:
**Hypothesis 1 In scenarios characterized by pattern collapse in the output space, the probability of**
_variations in the volatility of embedding shift trajectories across samples increases._
This hypothesis reflects that we are more likely to distinguish between ID and OOD samples
by the trajectory volatility in mathematical reasoning than in text generation. We specify that
**_yi_** _L ∼_ _N_ **_c, Σ[2]_** . According to the pattern collapse properties under different tasks, we can
constraint the endpoint embedding **_yi_** _L in the output space:_
[ ] ( )
- For mathematical reasoning with pattern collapse, [ ] Σ → **_O, so we approximate that_** **_yi_** _L ≡_ **_c;_**
- For text generation without pattern collapse, Σ = diag _δ1, δ2, ⋯, δD_ ≠ **_O, so_** **_yi_** _L ≢_ **_c._**
[ ]
With such constraints, we model our key hypothesis as a main theorem:
( ) [ ]
**Theorem 2.1 (Main Theorem) We assume that** **_yi_** _l_ _l=1_ _[are all independent variables sampling]_
_from d−dimemsional vector space R[d]. For different samples si and sj, the likelihood of variations in_
_trajectory volatility under mathematical reasoning scenarios is higher than that under text generation {[_ ] }[L]
_scenarios, which means:_
E **_yi_** _l_ _l=1[,][ {[][y]j_ []][l][}][L]l=1 [∼] _[U]_ [(][R][d][)][ {][V][ (][s][i][)][ −] **_[V][ (][s][j][)][ ≠]_** **[0][∣[][y][i][]][L][,][ [][y][j][]][L][ ≡]** **_[c][}]_**
> E{[yi]l}l[L]=1[,][ {[][y]j []][l][}][L]l=1 [∼] _[U]_ [(][R][d][)][ {][V][ (][s][i][)][ −] **_[V][ (][s][j][)][ ≠]_** **[0][∣[][y][i][]][L][,][ [][y][j][]][L][ ∼]** _[N]_ [(][c][,][ Σ][2][)}]
{[ ] }[L]
Due to space limits, We present the complete Theoretical Proof in Appendix A. This proof can
demonstrate that learning trajectories are prone to diverge between distinct samples when the output
space experiences pattern collapse. This divergence increases the likelihood of discerning samples
with varying features during reasoning tasks, as observed from the perspective of learning trajectories.
**2.2** **Early Stabilization: A Micro Empirical Perspective**
Figure 1 shows the minimal discrepancy
in mathematical reasoning data’s input
and output representations across various
domains. This poses a significant risk to the
efficacy of the current optimal embeddingbased detection methods. However, these
methods solely focus on the static nature
of the embedding, neglecting its dynamic
evolution within the latent space.
50
40
30
20
10
From this point, we first visualize the L1- 0 5 10 15 20 25 30
|Col1|In-Distribution (ID)|Col3|
|---|---|---|
|Out-of-Distribution (OOD)|Out-of-Distribution (OOD||
||||
||||
||||
||||
||||
In-Distribution (ID)
Out-of-Distribution (OOD)
Layer Number
norm of difference **_yl −_** **_yl−1_** 1 between
Figure 3: Change curves of embedding differences
the output yl of each hidden layer and the
between neighboring hidden layers after ID and OOD
output yl−1 of the previous layer under ∥ ∥ data are passed into GLMs (Llama-7B in this figure).
Llama-7B (32 layers), after ID and OOD
data are input into the model, separately. Figure 3 shows the change curves within all hidden layers.
We find that the embedding changes slightly until the 20th layer, and then enters the critical inference
process after 20 layers. However, for ID data, the magnitude of embedding change is again suppressed
after a few layers of inference, while for OOD data, the magnitude is maintained at a high level.
-----
We refer to this phenomenon as Early Stabilization under mathematical reasoning: For ID data,
GLMs largely complete their reasoning in the mid-to-late stages, and simple adjustments are sufficient
after that. However, for OOD data, GLMs can still not complete accurate reasoning at a later stage.
They thus can only randomly switch to a specific output pattern, i.e., the scalar immediate number
pattern. This enables an intuition that leverages the trajectory for OOD detection may be effective.
**3** **TV Score: Trajectory Volatility for OOD Detection**
In this section, we propose our trajectory-based algorithm TV Score to detect OOD samples in
mathematical reasoning scenarios. Formally, for each input sample sequence x = x1x2...xn, the
hidden embedding of i-th layer corresponding to the i-th token xi is denoted as h[l]i [∈] [R][d][. We set][ n]l
as the sequence length in l-th layer, and yl to be the average embedding in the l-th layer, namely
_nl_
**_yl =_** _n[1]l_ ∑ **_h[l]i_** 1 ≤ _i ≤_ _nl, 1 ≤_ _l ≤_ _L_ _._ (4)
_i=1_
Next, we need to obtain the trajectory volatility( Vl in each layer, then computer the final TV Score as:)
_L−1_
TV ∶=
_Vl._ (5)
∑
_l=1_
_L −_ 1
Therefore, defining the trajectory volatility Vl stands as a key concern. The volatility between each
neighboring layer-pair can be intuitively defined by the difference between the two embedding
vectors, so the L1-norm **_yl −_** **_yl−1_** 1 is the simplest way. However, anomalous changes in each
dimension under vector volatility will greatly affect the measure of volatility, introducing potentially
uncontrollable factors. To address the challenge, we want to find the most critical principal component
∥ ∥
that maps the vector in low dimensions.
We consider using Mahalanobis Distance (MaDis) [26] to complete this mapping. We follow the
assumptions of previous work [3, 46] and fit all average embedding yl of ID samples in each layer
to a Gaussian distribution Gl = N **_µl, Σl_** . For a new sample, We measure the MaDis between its
hidden embedding yl of i-th layer and Gl, the conversion function f **_yl_** ∶ R[d] → R is
( )
_f_ **_yl_** = **_yl −_** **_µl_** Σl **_yl −_** **_µl_** 1 ≤ _l ≤_ _L_ (6)
( )
The underlying assumption is that when the MaDis between embeddings and corresponding Gaussian
distribution of ID data in one layer and that of the neighboring layer has a larger difference, the( ) ( )[⊤] ( )[−][1] ( ) ( )
embedding shift between two layers is more volatile. In this case, Vl is defined as:
_Vl =_ _f_ **_yl_** − _f_ **_yl−1_** _._ (7)
**Differential Smoothing (DiSmo).** Outliers in the trajectory may significantly impact feature
∣ ( ) ( )∣
extraction [25, 32] then lead to errors. To mitigate this, we explore higher-order differential smoothing
techniques to enhance trajectory smoothness. We first define the Gaussian distribution difference and
embedding difference based on the additivity of the Gaussian distribution as
_N_ **_µlk_** _, Σlk_ = ∆[(][k][)]Gl = ∆[(][k][−][1][)]Gl+1 − ∆[(][k][−][1][)]Gl = ∆[(][k][−][2][)]Gl+2 − 2 ⋅ ∆[(][k][−][2][)]Gl+1 + ∆[(][k][−][2][)]Gl
( ) ( ) _k_ _k_
( ) = ⋯ = −1 C[i]k[µ]l+k[,] C[i]k[Σ]l+k[)][,]
_N_ ∑ ∑
_i=0_ _i=0_
(8)
( ( )[k][+][i] _k_
**_ylk_** = yl+k−1 1 − **_ylk−1_** = ⋯ = −1 C[i]k[y]l+k[,] (9)
∑
( ) ( ) ( ) _i=0_
where C[i]k [=] _i!_ _kk−!_ _i_ ! [. Similarly, we computer][ k][-th order embedding difference MaDis:]( )[k][+][i]
_k_ _k_ ⊤ _k_ −1 _k_ _k_
( ) _f_ **_yl_** = **_yl_** − **_µl_** Σl **_yl_** − **_µl_** _,_ (10)
( ) ( ) ( ) ( ) ( )
and in this case, Vl is defined as:[(][k][)]
( ) [ ] [ ] [ ]
_Vl =_ _[.]_ (11)
The complete pseudo-code is presented in Appendix C. In experiments, we choosek is too large, the over-smoothing phenomenon may happen and effective features will be eliminated.»[»]»»»[f][ (][k][)][(][y][l][)][ −] _[f][ (][k][)][(][y][l][−][1][)»]»»»»_ _k ≤_ 5 because if
-----
**4** **Experiments**
**4.1** **Setup**
**Dataset Selection.** For the ID dataset, we use the MultiArith [47], which consists of Math Word
Problems on arithmetic reasoning. For the OOD datasets, we intuitively introduce two types of
detection scenarios following [46]: (i) Far-shift OOD setting, we select the MATH [13] dataset with
five domains of algebra, geometry, counting and probability, number theory, and precalculus; (ii)
_Near-shift OOD setting, we select five independent datasets: GSM8K [7], SVAMP [42], AddSub [15],_
SingleEq [21], and SingleOp [21]. We consider the ID data negative(-) and the OOD data positive(+).
Refer to Appendix D.1 for basic information and OOD features of these datasets.
**Data Split and Sampling.** Given the limited data size of MultiArith, totaling only 600 samples
and lacking a standard division, we allocate 360 samples for training and 240 for testing. However,
with such a small test set, randomness in evaluation becomes a concern. To mitigate this, we conduct
test sampling and set the sampling size as 1000. Specifically, we denote ID dataset as Din and OOD
dataset as Dout. For each sampling, the collection is _Din,_ _D[̃]out_ where _D[̃]out ⊂_ _Dout and_ _Din_ = _Dout_,
this guarantees positive and negative sample balance. We report both the mean and standard variance
of the results to enhance the reliability of evaluations. Refer to Appendix D.2 for the ID dataset split.
{ } ∣ ∣ ∣ [̃] ∣
**Implementation.** _To measure the application value of our method used in cutting-edge GLMs,_
we use Llama2-7B [53] and GPT-2-XL (1.5B) [6] as our backbones for ID dataset training. Refer
to Appendix D.3 for training details. However, there exists uncertainty about the data used in the
pre-training phase, especially for Llama2 because its data is closed-source. Some research [57, 59]
have confirmed the absence of data leakage in Llama2 for the MATH and GSM8K datasets, we
still conduct pre-experiments to examine the rationality of the OOD data selection rigorously. Our
criterion is the claim that a dataset can be categorized as OOD if it exceeds the capabilities of the base
model, as proposed in prior studies [50, 31]. Results of the pre-experiments are shown in Appendix
D.4, and they can confirm that these datasets can be considered as OOD data for the two GLMs.
**Baseline.** We compare our method with some training-free baselines[1] where OOD training data
are unavailable: (1) Maximum Softmax Probability (MS-Prob) [14]; (2) Monte-Carlo Dropout
(MC-Drop) [10]; (3) Sequence Perplexity (PPL) [3]; (4) Input Embedding (I-Emb) [46]; (5) Output
Embedding (O-Emb) [46]. Additionally, we set the smoothing order k ranges from 1 to 5 and report
the highest among them when with smoothing. Refer to Appendix D.5 for baseline details.
**Evaluation.** We establish two main evaluation scenarios: (i) The basic OOD detection scenario, we
further divide it into offline and online discriminations; (ii) More challenging and fine-grained OOD
quality estimation. Task-specific metrics are presented in the corresponding result sub-sections.
**4.2** **Scenario I: OOD Detection**
**Offline Discrimination.** We utilize TV scores for offline OOD discrimination. For each collection
in, out, we report the AUROC [39] and FPR95 metrics. The former represents the area under the
_D_ _D[̃]_
ROC curve and the latter represents the value of FPR at 95% TPR. Table 1 presents the results.
{• Performance Analysis} : In the far-shift OOD setting, our average performance surpasses 98
**(Llama2-7B) and 96 (GPT2-XL) under the AUROC metric, surpassing the optimal baseline**
by 10+ points. Moreover, our performance stands at an impressive 5.21 (Llama2-7B) and 9.89
**(GPT2-XL) under the FPR95 metric, representing a remarkable 80%+ reduction compared**
to the optimal baseline, far surpassing all baseline methods. In the near-shift OOD setting, the
robustness of our method is even more impressive. All of the baseline methods show significant
performance degradation, especially in Llama2-7B, with the AUROC metric decreasing below 60
and the FPR95 elevating above 80. However, our method maintains excellent performances, with
AUROC scores surpassing 90 and FPR95 below 30. This indicates that for more fine-grained OOD
detection scenarios, our method demonstrates greater adaptability.
1We refer to the latest survey [23] to select baselines for the scarcity of OOD detection methods on GLMs.
-----
|Model|Llama2-7B [53]|GPT2-XL [6]|
|---|---|---|
|Metric Far-shift OOD Near-shift OOD Far-shift OOD Near-shift OOD Method AUROC ↑ FPR95 ↓ AUROC ↑ FPR95 ↓ AUROC ↑ FPR95 ↓ AUROC ↑ FPR95 ↓|Far-shift OOD Near-shift OOD||
||AUROC ↑ FPR95 ↓ AUROC ↑ FPR95 ↓||
|MS-Prob [14] MC-Drop [10] PPL [3] I-Emb [46] O-Emb [46]|78.66±1.38 81.44±3.56 60.14±1.54 88.91±2.41 68.63±2.21 87.04±4.88 52.33±2.21 91.92±1.89 85.64±1.46 53.06±4.36 59.35±1.89 86.09±1.89 75.89±1.03 67.87±3.69 60.33±1.37 84.65±2.53 74.86±1.39 75.21±2.16 44.50±1.06 86.46±1.59|70.54±1.42 78.29±2.02 67.12±1.20 76.27±2.66 66.18±1.87 84.69±1.65 63.54±1.72 78.08±2.50 80.82±1.04 64.53±2.10 73.74±1.12 72.39±1.27 86.26±0.84 49.33±2.10 83.22±0.88 52.90±3.16 77.95±1.16 65.64±3.42 79.28±1.24 64.70±2.72|
|TV score (Ours) w/ DiSmo (Ours)|98.76±0.11 5.21±0.98 92.64±0.39 28.39±1.38 93.25±0.76 41.82±4.69 56.99±1.41 88.01±1.71|93.47±0.08 24.10±0.95 94.86±0.23 13.82±0.36 96.54±0.11 9.89±0.61 94.19±0.25 13.66±0.69|
|∆(bold - underline)|+13.12 -47.85 +32.31 -56.26|+10.28 -39.44 +11.64 -39.24|
Table 1: OOD Detection — Offline Discrimination: AUROC and FPR95 results. Underline and
**bold denote SOTA among all baselines and all methods, respectively. We report the average results**
under each setting in the main text, results of each dataset are shown in Table 8 and 9 (Appendix E).
|Far-shift OOD Setting|Col2|Near-shift OOD Setting|Col4|
|---|---|---|---|
|Accuracy↑ Robustness↓ Accuracy↑ Robustness↓ Dataset Dataset I-Emb. / O-Emb. / TV (ours) I-Emb. / O-Emb. / TV (ours)||||
|Algebra Geometry Cnt.&Prob Num.Theory Precalculus|76.43 / 45.42 / 93.88 5.27 / 6.94 / 0.97 74.32 / 54.79 / 94.47 2.44 / 2.43 / 1.65 50.31 / 27.55 / 93.74 9.99 / 2.34 / 2.36 85.80 / 54.38 / 92.08 3.31 / 11.45 / 2.34 80.33 / 88.50 / 99.28 6.13 / 1.38 / 0.67|GSM8K SVAMP AddSub SingleEq SingleOp|81.49 / 75.32 / 93.39 10.08 / 3.36 / 2.05 68.66 / 63.33 / 94.88 5.26 / 3.54 / 2.13 79.16 / 78.09 / 74.11 3.21 / 6.98 / 2.77 59.83 / 72.56 / 93.15 11.57 / 3.14 / 3.17 69.38 / 62.20 / 95.75 4.00 / 2.37 / 2.45|
|Average|73.44 / 54.13 / 94.69 5.43 / 4.91 / 1.60|Average|71.70 / 70.30 / 90.26 6.82 / 3.88 / 2.51|
Table 2: OOD Detection — Online Discrimination: Accuracy and Robustness results. We mainly
compare our method with embedding-based methods, and bold denotes the best among these methods.
- Model Analysis: Comparing performances of Llama2-7B and GPT2-XL, we find two phenomena:
(i) Results on GPT2-XL are more stable, performance differences between GPT2-XL on farand near-drift settings are not significant, while Llama2-7B shows a significant performance
degradation (mainly for baselines) on near-shift setting; (ii) the DiSmo technique is more effective
on GPT2-XL, which suggests that there are more anomalous learning tendencies in latent spaces
of small models, and the smoothing helps to minimize these anomalies.
In addition, we conduct significant tests (Details are shown in Table 8 - 11). We find that our methods
almost pass all significance tests, while the embedding-based methods have the lowest pass rate
among baselines, suggesting that their results are more susceptible to sampling error.
**Online Discrimination.** In this part, we utilize the TV score for online OOD discrimination. For
each collection in, out, we obtain a detector and computer the optimal cut-off τi of Youden
_D_ _D[̃]_
Index, which is at the point in the AUROC curve where TPR − FPR is maximum. Then for all OOD
samples s ∈ out { − [̃]out, we donate} _t as the sampling size and computer the discrimination accuracy:_
_D_ _D_
_t_
Accuracy = [1] ∑s∈Dout−D̃out [I][ [][TV-Score][(][s][)][ ≥] _[τ][i][]]_ _._ (12)
_t_ ∑
_i=1_ _Dout[»]_
In addition, the discrimination accuracy should vary less under different data collections, reflectingthe discriminator’s robustness. Therefore, we denote the Robustness metric as sampling variance.»»»»»[D][out][ −] [̃] »»»»
Table 2 presents the results in Llama2-7B. Compared to the embedding-based methods, our TV
**score obtains about an average of 20-point accuracy improvement in both far-shift OOD and**
**near-shift OOD settings, and on some datasets, such as Cnt.&Prob, our TV score achieves more than**
40 points of improvement. These all imply that TV Score can perform online discrimination of OOD
samples more accurately. In addition, our TV score also possesses stronger robustness, which means
-----
|Model|Llama2-7B [53]|GPT2-XL [6]|
|---|---|---|
|Metric Far-shift OOD Near-shift OOD Far-shift OOD Near-shift OOD Method Kendall ↑ Spearman ↑ Kendall ↑ Spearman ↑ Kendall ↑ Spearman ↑ Kendall ↑ Spearman ↑|Far-shift OOD Near-shift OOD||
||Kendall ↑ Spearman ↑ Kendall ↑ Spearman ↑||
|MS-Prob PPL I-Emb O-Emb|0.024±0.020 0.038±0.020 0.038±0.018 0.026±0.018 0.050±0.015 0.045±0.016 0.074±0.017 0.050±0.018 0.078±0.016 0.102±0.017 0.036±0.018 0.115±0.017 0.058±0.018 0.025±0.017 0.038±0.015 0.012±0.017|0.066±0.015 0.044±0.016 0.057±0.018 0.057±0.022 0.036±0.014 0.038±0.017 0.035±0.018 0.058±0.019 0.059±0.012 0.098±0.016 0.012±0.018 0.068±0.016 0.050±0.012 0.016±0.017 0.036±0.017 0.029±0.021|
|TV score (Ours) w/ DiSmo (Ours)|0.161±0.012 0.147±0.015 0.159±0.017 0.158±0.017 0.111±0.016 0.152±0.015 0.113±0.018 0.134±0.017|0.138±0.010 0.123±0.013 0.131±0.015 0.146±0.015 0.139±0.009 0.141±0.014 0.123±0.014 0.154±0.016|
|∆(bold - underline)|+0.083 +0.050 +0.085 +0.043|+0.073 +0.043 +0.074 +0.086|
Table 3: OOD Quality Estimation: Kendall’s τ and Spearman correlation between various OOD
scores and benchmark quality metric binary matching. Each column shows the correlation when ID
and OOD samples are merged. Underline denotes the SOTA among all baselines, and bold denotes
the SOTA among our methods. We report the average results under each setting in the main text,
results of each dataset are shown in Table 10 and 11 (Appendix E).
that in real scenarios, we can find the optimal threshold more consistently in the face of different
accessible ID and OOD data, reducing the potential riskiness due to uncontrollable data acquisition.
**4.3** **Scenario II: OOD Quality Estimation**
In this part, we utilize the TV score for generative quality estimation (QE). For text generation, the
QE performance is usually measured by calculating the correlation coefficient between automatic
scores and human ratings. However, QE in mathematical reasoning scenarios is not a well-defined
problem. For one mathematical question, its answer is either right or wrong, the intermediate state
does not exist. For example, when the correct answer is 12.5, it is difficult to judge which is better
between generated answers of 1.25 and 13.6. The human approach may be to judge by comparing
the difference value or similarity like Rouge [28] and BertScore [60] between the generated answer
and the correct answer, which is unfair to the machine because there is a lot of randomness in the
intermediary process of computation, and the solution pattern of machines is case-based [16], so it is
not suitable to judge the machine-generated results with customized mathematical rules.
Therefore, we use the binary direct matching[2] to compare the model-generated answers with the
correct answers. Considering the open-ended output of the GLMs, we give a loose matching condition,
_i.e., as long as the correct answer is included in the generated answer by the model, the generated_
answer is recognized as correct and the matching score is 1, otherwise the matching score is 0. We
compute the Kendall rank correlation coefficient τ [48] and Spearman rank correlation coefficient
[40] between each OOD score and the matching score.
Table 3 presents the results. For Llama2-7B, when compared with Kendall correlation, the correlation
improvement of TV scores over SOTA baselines reaches up to 100% under both far-shift and
**near-shift OOD settings. Compared with Spearman correlation, TV scores demonstrate a correlation**
enhancement over SOTA baselines by up to 100% under far-shift OOD setting and 30% under
**near-shift OOD setting. GPT2-XL also demonstrates excellent performance. These findings indicate**
that our TV scores not only facilitate the binary discrimination of ID and OOD samples but also
substantially reflect the quality and precision of generated mathematical reasoning.
**4.4** **Beyond Mathematical Reasoning**
Apart from mathematical reasoning, our method also has a wider range of potential applications
that can be extended to any task where the output space exhibits the pattern collapse property.
An example would be multiple-choice questions, which is a popular evaluation tool in the era of
large language models and also display the pattern collapse property due to the limited output space
being confined to the “ABCD” four options. To verify the generalizability of our method, we conduct
experiments using the multiple-choice dataset MMLU [12], and our method also outperforms all
traditional algorithms in this setting. Results and analyses are shown in Appendix F.
2Although direct matching is the most accurate solution, it suffers from two issues: (i) Generated answers
may include much noisy content, increasing the matching difficulty; (ii) Performances on GLMs of mathematical
reasoning is poor, which unbalances the positive and negative samples and increases the randomness.
-----
**5** **Discussion**
**Hyperparameter: Smoothing Order k.** In the main experiments, we find that the performance of
differential smoothing fluctuates greatly in different settings. Therefore, we conduct the ablation of
the smoothing order k. Results and analyses are shown in Appendix G.1.
**Dilemmas of Embedding Representations in Reasoning.** cos 𝜃𝜃1 = cos < 𝐴𝐴, 𝑋𝑋> = 0.653
We explore why the input-embedding-based method is notpowerful under mathematical reasoning, as prior work [has shown it to be very effective in text generation scenarios.46] cos 𝜃𝜃cos 𝜃𝜃OODxˆ2*yˆ3*xˆ4=?23 = cos < 𝐴𝐴, 𝑌𝑌> = 0.847 = cos < 𝐴𝐴, 𝑍𝑍> = 0.783 (Far-domain Near-hop): 𝑌𝑌 Benchmark:2+3+4=?𝐴𝐴
𝐴𝐴
𝑌𝑌
Considering an intuitive explanation shown in Figure 4, we
construct toy illustration where we have a test sample “A ∶ 𝑋𝑋 ID:0-8+9=?
```
2+3+4=?”, a ID sample “X ∶ 0-8+9=?”, a far-domain OOD
𝑋𝑋
```
“sample “similarities ofZ ∶ `234*2+345*4+456*4-243*3=?Y ∶` `xˆ2*yˆ3*xˆ4=? X, Y, Z with A. We find that ID data receives”, and a far-digit OOD sample”. We computer cosine` 𝜃𝜃2𝜃𝜃 𝜃𝜃1 3 OOD234*2+345*4+456*4-243*3=? (Near-domain Far-hop):𝑍𝑍
𝑍𝑍
the lowest similarity scores, which reflect that as a semantic
|cos cos|𝜃1𝜃 = cos 𝜃2𝜃 = cos|< 𝐴𝐴, 𝑋𝑋 < 𝐴𝐴, 𝑌𝑌|> = 0.65 > = 0.84|3 7|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|cos|𝜃3𝜃 = cos|< 𝐴𝐴, 𝑍𝑍|> = 0.78|3||𝐴𝐴||
|||||||||
||OOD (Fa xˆ2*yˆ|r-domain N 3*xˆ4=?|ear-hop):|𝑌𝑌||Benc 2+3+|hmark: 4=?|
||ID:|||||||
|𝑋𝑋|0-8+9=|?||||||
||||||𝑍𝑍|||
|𝜃 𝜃2𝜃|1𝜃|||OOD (N 234*2+|ear-domai 345*4+|n Far-hop): 456*4-2|43*3=?|
||𝜃3𝜃|||||||
**representation, embedding cannot measure difficulty and** Figure 4: A toy embedding projec**digit accurately in mathematical reasoning scenarios.** tion. A is the benchmark, X is the ID
sample, Y and Z are OOD samples.
We also use powerful sentence embedding techniques like
SimCSE [11] to verify this conclusion. Results and analyses are shown in Appendix G.2.
**Can Chain-of-Thought Address Pattern Collapse?** An intuitive approach to addressing pattern
collapse in the output space is to use chain-of-thought (CoT) techniques [54, 20] to expand the output
space size. Therefore, we include the solution steps in the output to re-evaluate the output embedding
results and find that CoT cannot address this issue. Results and analyses are shown in Appendix G.3.
**6** **Related Work**
**Data-unavailable OOD Detection: Generic Methods.** In scenarios where OOD data is unavailable.
Detection methods are categorized into three main types: (1) Output-based methods assess confidence
using predicted probabilities. (2) Ensemble-based methods assess the uncertainty of a collection
of supporting models, classical techniques are Monte-Carlo dropout [49, 10] based on Bayesian
inference and deep ensemble [22, 35]; (3) Feature-based methods assess the Mahalanobis Distance
between OOD samples and the distribution of the ID data feature space, usually considering the
input and output spaces [26, 45, 51], occasionally extending to specific hidden layer spaces [1].
Besides, there are some fragmented methods, such as gradient methods [27, 17] and autoencoder
reconstruction methods [8, 18]. Still, these methods suffer from optimization and computational
complexity with serious performance bottlenecks [58] and thus are not mainstream detection methods.
**OOD Detection in GLMs.** Relatively few studies have explored OOD detection in GLMs, mainly
in semantic parsing [35, 29], speech recognition [37], machine translation [56, 36], summarization
[46], and they do not jump out of the frameworks of uncertainty estimation, ensemble-based methods,
and embedding-based methods. To our knowledge, we are the first to study OOD detection on
**mathematical reasoning, and we have found the failure of traditional algorithms in this scenario.**
Mathematical reasoning is an important and difficult research topic in the era of LLMs, and this
research is valuable for the scenario expansion of OOD detection algorithms on language models.
**7** **Conclusion**
We propose the TV score, a lightweight OOD detection method for mathematical reasoning, which
distinguishes between ID and OOD samples by the embedding trajectory volatility in the latent space.
We identify bottlenecks in OOD detection for mathematical reasoning and prove them empirically and
theoretically. Experiments show that our method substantially outperforms all traditional algorithms,
and can be extended to more application scenarios beyond mathematical reasoning.
-----
**Limitations**
In this part, we discuss the limitations of this paper and outline our efforts to make up for them, along
with potential avenues for future exploration.
- Dataset Size: Due to the difficulty of collecting and labeling mathematical reasoning data, dataset
sizes in this field are generally small, mostly in the hundreds or thousands, making it difficult
to obtain millions of data for training and reasoning as in translation or summarization tasks.
To address this, we adopt test sampling to reduce the randomness under small-scale testing and
mitigate the data imbalance.
- Generalizability: Our motivation and excavated phenomena are the mathematical reasoning
scenario, and thus our main experiments are mainly centered on mathematical reasoning. However,
the motivation for the method is primarily related to pattern collapse in the output space, and thus
our method can be theoretically extended to any task where the output space has this property,
such as multiple-choice questions. We have provided a preliminary demonstration of the extension
feasibility in Appendix F, and more exploration can be left to future work.
**Ethics Statement**
The data and models used in this work are sourced from the official version of the original paper, and
we strictly adhere to the provided usage protocol. Regarding the data, no modifications have been
made to the original dataset. Regarding the models, supervised fine-tuning and OOD data inference
are involved. To mitigate the risk of uncontrollable outputs, all generated outputs in the experiments
have been reviewed to ensure their safety. Furthermore, as our focus is solely on mathematical
reasoning and does not involve sensitive content, we would not cause any potential societal impact.
**References**
[1] Vahdat Abdelzad, Krzysztof Czarnecki, Rick Salay, Taylor Denounden, Sachin Vernekar, and
Buu Phan. Detecting out-of-distribution inputs in deep neural networks using an early-layer
output. arXiv preprint arXiv:1910.10307, 2019.
[2] Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni
Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al. Gpt-4
technical report. arXiv preprint arXiv:2303.08774, 2023.
[3] Udit Arora, William Huang, and He He. Types of out-of-distribution texts and how to detect
them. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language
_Processing, pages 10687–10701, 2021._
[4] Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, and Dmitry Vetrov. Pitfalls of indomain uncertainty estimation and ensembling in deep learning. In International Conference
_on Learning Representations, 2019._
[5] Charles Blundell, Julien Cornebise, Koray Kavukcuoglu, and Daan Wierstra. Weight uncertainty
in neural network. In International conference on machine learning, pages 1613–1622. PMLR,
2015.
[6] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
[7] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[8] Taylor Denouden, Rick Salay, Krzysztof Czarnecki, Vahdat Abdelzad, Buu Phan, and Sachin
Vernekar. Improving reconstruction autoencoder out-of-distribution detection with mahalanobis
distance. arXiv preprint arXiv:1812.02765, 2018.
[9] Geli Fei and Bing Liu. Breaking the closed world assumption in text classification. In
_Proceedings of the 2016 Conference of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Technologies, pages 506–514, 2016._
-----
[10] Yarin Gal and Zoubin Ghahramani. Dropout as a bayesian approximation: Representing model
uncertainty in deep learning. In international conference on machine learning, pages 1050–1059.
PMLR, 2016.
[11] Tianyu Gao, Xingcheng Yao, and Danqi Chen. Simcse: Simple contrastive learning of sentence
embeddings. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language
_Processing, pages 6894–6910, 2021._
[12] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and
Jacob Steinhardt. Measuring massive multitask language understanding. In International
_Conference on Learning Representations, 2020._
[13] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In
_Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks_
_Track (Round 2), 2021._
[14] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution
examples in neural networks. In International Conference on Learning Representations, 2016.
[15] Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to
solve arithmetic word problems with verb categorization. In Proceedings of the 2014 Conference
_on Empirical Methods in Natural Language Processing (EMNLP), pages 523–533, 2014._
[16] Yi Hu, Xiaojuan Tang, Haotong Yang, and Muhan Zhang. Case-based or rule-based: How do
transformers do the math? arXiv preprint arXiv:2402.17709, 2024.
[17] Rui Huang, Andrew Geng, and Yixuan Li. On the importance of gradients for detecting
distributional shifts in the wild. Advances in Neural Information Processing Systems, 34:677–
689, 2021.
[18] Wenyu Jiang, Yuxin Ge, Hao Cheng, Mingcai Chen, Shuai Feng, and Chongjun Wang. Read:
Aggregating reconstruction error into out-of-distribution detection. In Proceedings of the AAAI
_Conference on Artificial Intelligence, volume 37, pages 14910–14918, 2023._
[19] Amita Kamath, Robin Jia, and Percy Liang. Selective question answering under domain shift.
In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics,
pages 5684–5696, 2020.
[20] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems,
35:22199–22213, 2022.
[21] Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
Mawps: A math word problem repository. In Proceedings of the 2016 conference of the
_north american chapter of the association for computational linguistics: human language_
_technologies, pages 1152–1157, 2016._
[22] Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable
predictive uncertainty estimation using deep ensembles. Advances in neural information
_processing systems, 30, 2017._
[23] Hao Lang, Yinhe Zheng, Yixuan Li, SUN Jian, Fei Huang, and Yongbin Li. A survey on
out-of-distribution detection in nlp. Transactions on Machine Learning Research, 2023.
[24] Stefan Larson, Anish Mahendran, Joseph J Peper, Christopher Clarke, Andrew Lee, Parker
Hill, Jonathan K Kummerfeld, Kevin Leach, Michael A Laurenzano, Lingjia Tang, et al. An
evaluation dataset for intent classification and out-of-scope prediction. In Proceedings of
_the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th_
_International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages_
1311–1316, 2019.
-----
[25] Rikard Laxhammar and Göran Falkman. Online learning and sequential anomaly detection in
trajectories. IEEE transactions on pattern analysis and machine intelligence, 36(6):1158–1173,
2013.
[26] Kimin Lee, Kibok Lee, Honglak Lee, and Jinwoo Shin. A simple unified framework for
detecting out-of-distribution samples and adversarial attacks. Advances in neural information
_processing systems, 31, 2018._
[27] Shiyu Liang, Yixuan Li, and R Srikant. Enhancing the reliability of out-of-distribution image
detection in neural networks. In International Conference on Learning Representations, 2018.
[28] Chin-Yew Lin. Rouge: A package for automatic evaluation of summaries. In Text summarization
_branches out, pages 74–81, 2004._
[29] Zi Lin, Jeremiah Zhe Liu, and Jingbo Shang. Towards collaborative neural-symbolic graph
semantic parsing via uncertainty. Findings of the Association for Computational Linguistics:
_ACL 2022, 2022._
[30] Weitang Liu, Xiaoyun Wang, John Owens, and Yixuan Li. Energy-based out-of-distribution
detection. Advances in neural information processing systems, 33:21464–21475, 2020.
[31] Yang Liu, Yuanshun Yao, Jean-Francois Ton, Xiaoying Zhang, Ruocheng Guo, Hao Cheng,
Yegor Klochkov, Muhammad Faaiz Taufiq, and Hang Li. Trustworthy llms: a survey and
guideline for evaluating large language models’ alignment. In Socially Responsible Language
_Modelling Research, 2023._
[32] Yiding Liu, Kaiqi Zhao, Gao Cong, and Zhifeng Bao. Online anomalous trajectory detection
with deep generative sequence modeling. In 2020 IEEE 36th International Conference on Data
_Engineering (ICDE), pages 949–960. IEEE, 2020._
[33] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International
_Conference on Learning Representations, 2018._
[34] Christos Louizos and Max Welling. Multiplicative normalizing flows for variational bayesian
neural networks. In International Conference on Machine Learning, pages 2218–2227. PMLR,
2017.
[35] Denis Lukovnikov, Sina Daubener, and Asja Fischer. Detecting compositionally out-ofdistribution examples in semantic parsing. In Findings of the Association for Computational
_Linguistics: EMNLP 2021, pages 591–598, 2021._
[36] Andrey Malinin, Neil Band, Yarin Gal, Mark Gales, Alexander Ganshin, German Chesnokov,
Alexey Noskov, Andrey Ploskonosov, Liudmila Prokhorenkova, Ivan Provilkov, et al. Shifts: A
dataset of real distributional shift across multiple large-scale tasks. In Thirty-fifth Conference
_on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2), 2021._
[37] Andrey Malinin and Mark Gales. Uncertainty estimation in autoregressive structured prediction.
In International Conference on Learning Representations, 2020.
[38] Leland McInnes, John Healy, Nathaniel Saul, and Lukas Großberger. Umap: Uniform manifold
approximation and projection. Journal of Open Source Software, 3(29):861, 2018.
[39] Charles E Metz. Basic principles of roc analysis. In Seminars in nuclear medicine, volume 8,
pages 283–298. Elsevier, 1978.
[40] Leann Myers and Maria J Sirois. Spearman correlation coefficients, differences between.
_Encyclopedia of statistical sciences, 12, 2004._
[41] Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua
Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model’s uncertainty?
evaluating predictive uncertainty under dataset shift. Advances in neural information processing
_systems, 32, 2019._
-----
[42] Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve
simple math word problems? In Proceedings of the 2021 Conference of the North American
_Chapter of the Association for Computational Linguistics: Human Language Technologies,_
pages 2080–2094, 2021.
[43] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language
understanding by generative pre-training. 2018.
[44] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[45] Jie Ren, Stanislav Fort, Jeremiah Liu, Abhijit Guha Roy, Shreyas Padhy, and Balaji
Lakshminarayanan. A simple fix to mahalanobis distance for improving near-ood detection.
_arXiv preprint arXiv:2106.09022, 2021._
[46] Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan,
and Peter J Liu. Out-of-distribution detection and selective generation for conditional language
models. In The Eleventh International Conference on Learning Representations, 2023.
[47] Subhro Roy and Dan Roth. Solving general arithmetic word problems. In Proceedings of the
_Conference on Empirical Methods in Natural Language Processing, pages 1743–1752, 2015._
[48] Pranab Kumar Sen. Estimates of the regression coefficient based on kendall’s tau. Journal of
_the American statistical association, 63(324):1379–1389, 1968._
[49] Rupesh K Srivastava, Klaus Greff, and Jürgen Schmidhuber. Training very deep networks.
_Advances in neural information processing systems, 28, 2015._
[50] Lichao Sun, Yue Huang, Haoran Wang, Siyuan Wu, Qihui Zhang, Chujie Gao, Yixin Huang,
Wenhan Lyu, Yixuan Zhang, Xiner Li, et al. Trustllm: Trustworthiness in large language models.
_arXiv preprint arXiv:2401.05561, 2024._
[51] Yiyou Sun, Yifei Ming, Xiaojin Zhu, and Yixuan Li. Out-of-distribution detection with deep
nearest neighbors. In International Conference on Machine Learning, pages 20827–20840.
PMLR, 2022.
[52] Jörg Tiedemann. Parallel data, tools and interfaces in opus. In Proceedings of the Eighth
_International Conference on Language Resources and Evaluation, pages 2214–2218, 2012._
[53] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[54] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le,
Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models.
_Advances in neural information processing systems, 35:24824–24837, 2022._
[55] Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. Flipout: Efficient pseudoindependent weight perturbations on mini-batches. In International Conference on Learning
_Representations, 2018._
[56] Tim Z Xiao, Aidan Gomez, and Yarin Gal. Wat zei je? detecting out-of-distribution translations
with variational transformers. 2020.
[57] Ruijie Xu, Zengzhi Wang, Run-Ze Fan, and Pengfei Liu. Benchmarking benchmark leakage in
large language models. arXiv preprint arXiv:2404.18824, 2024.
[58] Jingkang Yang, Kaiyang Zhou, Yixuan Li, and Ziwei Liu. Generalized out-of-distribution
detection: A survey. arXiv preprint arXiv:2110.11334, 2021.
[59] Hugh Zhang, Jeff Da, Dean Lee, Vaughn Robinson, Catherine Wu, Will Song, Tiffany Zhao,
Pranav Raja, Dylan Slack, Qin Lyu, et al. A careful examination of large language model
performance on grade school arithmetic. arXiv preprint arXiv:2405.00332, 2024.
[60] Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. Bertscore:
Evaluating text generation with bert. In International Conference on Learning Representations,
2019.
-----
**A** **Motivation I: Theoretical Analysis**
In this section, we theoretically analyze Why pattern collapse in the output space leads to a greater
_likelihood of volatility differences in trajectories under different samples, which corresponds to_
Hypothesis 1 in the main paper.
**A.1** **Problem Setup**
**Latent Space Embedding.** For a given sample si, **_yi_** _l ∈_ R[d] 1 ≤ _l ≤_ _L_ denote the hidden
embedding of the l-th layer. We define the embedding coordinate in the latent space as _l,_ **_yi_** _l_ _l=1[.]_
[ ] ( )
**Embedding Interpolation.** We assume the F i _x_ = _Fi1_ _x_ _, Fi2_ _x_ _, ..., Fid_ _x {(_ [∶ R] →)}[L]R[d]
with d independent components fits the L coordinates, representing a continuous learning trajectory
for si, so **_yi_** _l = F i_ _l_ . F i _x_ is taken from the( ) _functional space [_ ( ) ( **_X)_**, so (F i)]l[⊤] _l=1_ [are all]
independent variables. We constrain each component function of this F i _x_ to satisfy the m-order
_m ≥_ 3 derivability property. Under this setting, the definition of component-independent trajectory [ ] ( ) ( ) { ( )}[L]
volatility (Eq. 3) equates to
( )
( **_V_** _si) =_ _V1_ _si_ _, V2_ _si_ _, ..., VL_ _si_
_L−1_ (13)
( ) = [[1] ( ) _Fi1(_ _l_ ) − _Fi1_ _l( −_ )]1 [⊤], _Fi2_ _l_ − _Fi2_ _l −_ 1 _, ...,_ _Fid_ _l_ − _Fid_ _l −_ 1
_L_ ∑
_l=1_
[∣ ( ) ( )∣ ∣ ( ) ( )∣ ∣ ( ) ( )∣][⊤]
**Modeling.** We observe the “pattern collapse” phenomenon in the output space in Figure 1. We
abstract this phenomenon in Figure 2, which compares the trajectory trend between different samples
in the mathematical reasoning and text generation scenarios.
We specify that **_yi_** _L = F i_ _L_ ∼ _N_ **_c, Σ[2]_**, where
**_c =_** _c1, c2, ..., cd_ _,_ Σ = diag _δ1, δ2, ⋯, δd_ _._ (14)
[ ] ( ) ( )
According to the pattern collapse property under different tasks, we can constraint the endpoint
embedding F i _L_ in the output space: [ ][⊤] ( )
- For mathematical reasoning with pattern collapse, Σ → **_O, so we approximate that F i_** _L_ ≡ **_c;_**
( )
- For text generation without pattern collapse, Σ ≠ **_O, so F i_** _L_ ≢ **_c._**
( )
With such constraints, we model the main theorem:
( )
**Theorem A.1 (Main Theorem) For different samples si and sj, the likelihood of variations in**
_trajectory volatility under mathematical reasoning scenarios is higher than that under text generation_
_scenarios, which means:_
E **_F i_** _l_ _l=1[,][ {][F][ j]_ [(][l][)}][L]l=1 [∼] _[U]_ [(][R][d][)][ {][V][ (][s][i][)][ −] **_[V][ (][s][j][)][ ≠]_** **[0][∣][F][ i][(][L][)][,][ F][ j][(][L][)][ ≡]** **_[c][}]_**
_where U>R E[d]_ {{F denotes the uniform distribution defined in i((l)})}l[L][L]=1[,][ {][F][ j] [(][l][)}]l[L]=1 [∼] _[U]_ [(][R][d][)][ {][V][ (][s][i][)][ −] **_[V][ (][s][j][)] d[ ≠]-dimensional real number space.[0][∣][F][ i][(][L][)][,][ F][ j][(][L][)][ ∼]_** _[N]_ [(][c][,][ Σ][2][)}][,]
**A.2** **Prelinimary(** )
Next, we move on to the formal proofs. We begin with some propositions and lemmas that will be
useful in the main theorem.
**Proposition A.1 (Lagrange Remainder Term) In the Taylor Expansion expression**
_f_ _x_ = f _a_ + [d]d[f]x [(][x][ −] _[a][)][ +][ 1]2!_ [⋅] [d]d[2]x[f][2][ (][x][ −] _[a][)][2][ + ⋯+][ 1]n!_ [⋅] [d]d[n]x[f][n][ (][x][ −] _[a][)][n][ +][ R][n][+][1][(][x][)][,]_
_the remainder Rn+1_ _x_ _has the following property:_
( ) ( )
1 _M_
_Rn+1_ _x(_ )= ≤
∣ ( )∣ »»»»»»»»» (n + 1)! [⋅] [d]d[n]c[+][n][1][+][f][1][ ⋅] [(][x][ −] _[a][)][n][+][1][»]»»»»»»»»_ (n + 1)! [⋅] [»]»»»»[(][x][ −] _[a][)][n][+][1]»»»[»]»_ _[,]_
-----
_where c ∈_ _a, x_ _or c ∈_ _x, a_ _, and M = sup_ dd[n]ξ[n][+][1][+]f[1]
[∶] _[ξ][ ∈]_ [[][a, x][]][ or][ ξ][ ∈] [[][x, a][]}][ >][ 0][.]
_Proof._ We consider the case of[ ] [ ] _a < x with a { >»»[»]»»»_ _x identically. We use the Fundamental Theorem of»»»»»»_
Calculus (FTC) for the most basic expansion of f _x_ :
_x_ df
_f_ _x_ = f _a_ +( ) (15)
∫a dx1 [d][x][1][.]
Continue to use the FTC to expand the derivatives in integrals:( ) ( )
_x_ df _x_ _x1_ d[2]f
_f_ _x_ = f _a_ + dx2 dx1
∫a dx1 [d][x][1][ =][ f] [(][a][)][ +][ ∫]a da [+][ ∫]a dx[2]2
( ) ( ) _x_ _x1_ d[2]f ( [d][f] )
= f _a_ + [d][f] dx2 dx1
dx [(][x][ −] _[a][)][ +][ ∫]a_ ∫a dx[2]2 (16)
= ⋯( )
_n_ 1 _x_ _x1_ _x2_ _xn_ d[n][+][1]f
= ⋯ dxn+1 dxn⋯ dx1.
_k∑=0_ _k!_ [⋅] [d]d[k]a[f][k][ ⋅] [(][x][ −] _[a][)][k][ +][ ∫]a_ ∫a ∫a ∫a dxn[n]+[+]1[1]
Therefore, the generalized remainder is known as
_x_ _x1_ _x2_ _xn_ d[n][+][1]f
_Rn+1_ _x_ = ⋯ dxn+1 dxn⋯ dx1
∫a ∫a ∫a ∫a dxn[n]+[+]1[1] (17)
( ) _x_ 1
=
∫a _n!_ [⋅] [d]d[n]t[+][n][1][+][f][1][ ⋅] [(][x][ −] _[t][)][n][ d][t.]_
We let mn+1 = mint∈ _a,x_ dd[n]t[n][+][1][+]f[1][ and][ M][n][+][1][ =][ max][t][∈][[][a,x][]] dd[n]t[n][+][1][+]f[1][, so]
[ _x]_ _x_ _x_
d[n][+][1]f
_mn+1_ _x −_ _t_ dt ≤ _x −_ _t_ dt
∫a ∫a dt[n][+][1][ ⋅] [(][x][ −] _[t][)][n][ d][t][ ≤]_ _[M][n][+][1][ ∫]a_
_x_ d[n][+][1]f (18)
∫(a dt[n][+])[1][n][ ⋅] [(][x][ −] _[t][)][n]_ ( )[n]
⟹ _mn+1 ≤_ _x−a_ ≤ _Mn+1._
_n+1_
( )[n][+][1]
According to the Lagrange’s Mean Value Theorem, there must exist a number c ∈ _a, x_ with
_x_ d[n][+][1]f
d[n][+][1]f ∫a dt[n][+][1][ ⋅] [(][x][ −] _[t][)][n]_ [ ]
_,_ (19)
dc[n][+][1][ =] _x−a_
_n+1_
( )[n][+][1]
this gives us
_x_ d[n][+][1]f
∫a dt[n][+][1][ ⋅] [(][x][ −] _[t][)][n][ =][ d]d[n]c[+][n][1][+][f][1][ ⋅]_ [(][x][ −]n +[a][)] 1[n][+][1] (20)
1
⟹ _Rn+1_ _x_ =
_n + 1_ ! [⋅] [d]d[n]c[+][n][1][+][f][1][ ⋅] [(][x][ −] _[a][)][n][+][1][,]_
and the equation on the left side of the proposition is proved completely. The inequality on the( )
right-hand side is clearly established by the Lagrange’s Mean Value Theorem.( ) □
**Lemma A.1 (Error Bound for the Midpoint Rule) Suppose that f** _x_ _is a m-th_ _m ≥_ 2 _order_
_differentiable function on the interval_ −∞, +∞ _, and K = sup_ dd[2]xf[2]
[∶] _[x][ ∈]_ [[][a, b][]}][ ∈] [R][, then]
( ) ( )
∫ab _f_ _x_ dx ( − _b −_ _a_ _f)_ 2 ≤ 24[K] {[(]»»[»]»»»[b][ −] _[a]»»»»»»[)][3]_
»»»»»»»» ( ) ( ) ( _[a][ +][ b]_ )»»»»»»»»
-----
_Proof._ We do the first order Taylor Expansion for f _x_ at the midpoint x = _[a][+]2_ _[b]_ of the interval
_a, b_ :
_b_ _b_
( )
_f_ _x_ dx − _b −_ _a_ _f_ = _f_ _x_ − _f_ dx
[ ] ∫a 2 ∫a 2
== »»»»»»»»»»»»»»»»»»»0∫ +ab ∫⎡⎢⎢⎢⎢⎢⎢⎢⎣(dab ()Rd[a]f[+]21 _[b]x[)]_ ⋅ d((xx −= _[a])∫[ +]2 (ab[ b][a]R)[ +] +1_ _[ b]x R)»»»»»»»» d1(xx)»»»»»»»»⎤⎥⎥⎥⎥⎥⎥⎥⎦_ dx[»»»»»»»»»»» ( ) ( _[a][ +][ b]_ )] »»»»»»»» (21)
Following Theorem A.1, we do the inequality scaling as below:b »»»»»»»» _b_ _K(_ ) »»»»»»»» »»»»»»»» 2 ( ) »»»»»»»» 3 3
_R1_ _x_ dx ≤ dx = _[K]_ − = _[K]_
∫a ∫a 2! 2 6 2 2 24 [(][b][ −] _[a][)][3]_
»»»»»»»» ( ) »»»»»»»» »»»»»»»»» [(][x][ −] _[a][ +][ b]_ ) »»»»»»»»» [[(] _[b][ −]_ _[a]_ ) ( _[a][ −]_ _[b]_ ) ] (22)□
**Lemma A.2 (Differential-Integral Error Order Estimation) Suppose that f** _x_ _is a m-th_
_m_ ≥ 3 _order_ _differentiable_ _function_ _on_ _the_ _interval_ −∞, +∞ _and_ _Ki_ =
sup d[i]f ( )
dx[i]
[∶] _[x][ ∈]_ [(][−∞][,][ +∞][)}][ <][ +∞] [(][i][ =][ 1][,][ 2][,][ ⋯][, m][)][, then on the closed interval][ [][a, b][]][, we]
(have ) ( )
{»»[»]»»» »»»»»» _L_ _b_ df
∶= _f_ _a +_ _[b][ −]_ _[a]_ _i_ − _f_ _a +_ _[b][ −]_ _[a]_ _i −_ 1 − dx ≤ _R_ _L_ ∼ _o_ [1]
_Proof.A_ For each sub-interval {»»»»»»»»»»∑i=1 »»»»»»»» ( _L_ ) _a + ([b][−]L[a]_ _L_ ( _L))»»»»»»»»[i][)][, we perform a Taylor Expansion at the]∫a_ »»»»»»» dx »»»»»»» »»»»»»»»»»} ( ) ( _L_ [)][.]
midpoint x = a + _[b][−]L[a]_ 2 [)][ to obtain the following expression:]
_L_ ( [(][i][ −] [1][)][, a][ +][ b][−][a]
_f_ _a +[(][b][i][ −][ −]_ _[a][1]i_ − _f_ _a +_ _[b][ −]_ _[a]_ _i −_ 1
∑ _L_ _L_
_i=1_
= _L_ »»»»»»» _f(_ _a +_ _[b][ −]_ _[a])_ _i −([1]_ ( d))f »»»»»»» ⋅ _[b][ −]_ _[a]_ + 2
Following Lemma A.1, we do the inequality scaling for the first-order derivative summation:=≤= _[b][b]∑∑ii−==L[ −][ −]LL11⎡⎢⎢⎢⎢⎢⎢⎢⎣»»»»»»»»»»»»»»»»»»»»»»[a][a]f⎡⎢⎢⎢⎢⎢⎢⎢⎣d ( (⋅⋅ (a∑∑iia==LL + +11_ »»»»»»»»»»»»»»»»»»»»»»[b]dd[b][ −][−]Ld ( (L[a]faaL[a][(] + +[i]([ −]i( −[b][b][−][−]LLdd2[1][a][a]ff[))][1]2[(][(][))][i][i]2[ −][ −]⋅[))][ +][b][ −][ +]2[1]2[1]L[))][))]d[a] (d»»»»»»»»»»»»»»»»»»»»»»a (+++ +a O O +∑i=L[b](21[−](dL[b]L[a]2OLf[1][1][−]L[2][(][a]([)][ )][i][(][ −]L»»»»»»»»»»»[i][1][ −][2][ )]2[1] [))][1]2 [))]⋅ _[a][ −]LL[b]_ + O O [( [([b][ −]2[b]L[ −]2[a]L[)][a]2[)]]⎤⎥⎥⎥⎥⎥⎥⎥⎦]»»»»»»»»»»»⎤⎥⎥⎥⎥⎥⎥⎥⎦ (23)
_b −_ _a_ _L_ df _L_ _a+_ _[b]L[−][a]_ _[i]_ df 3
⋅ ≤ + _[K][3]_ dx
= ∫Lab dd∑ifx=1 »»»»»»»»»»»+d ([K]a[3] +[(]24[b][ −]L[b][−]L[2][a][a][(][)][i][3][ −] d[1]2 [))]x =»»»»»»»»»»» ∫a∑ib=1dd∫fxa+ _[b]dL[−][a]x[(] +[i][−][1][K][)]_ [[3]»»»»»»»[(]24d[b][ −]xL»»»»»»»[2][a][)][4]24 [(] _[b][ −]L_ _[a]_ ) ] (24)
[»»»»»»» »»»»»»» ] »»»»»»» »»»»»»»
-----
∶≤ _[K][3][(][b][ −]_ _[a][)][4]_ + [1] (25)
_A_ 24L[2] _O_ _L_ [)][ =][ R][(][L][)][ ∼] _[o][(][ 1]L_ [)]
□
(
**A.3** **Main Theorem Proof**
Finally, we formally prove our main theorem A.1 in this sub-section, thereby verifying the Hypotheses
1 in the main paper. Since the d components of V _si_ − **_V_** _sj_ are independent of each other, we
only consider the d-th dimensional component and the rest of the components Vd _si_ − _Vd_ _sj_ can
be proved in the same way.
( ) ( )
_Proof._ According to Lemma A.2, we can approximate Vd _si_ − _Vd_ _sj_ as ( ) ( )
_L_ _L_
_Vd_ _si_ − _Vd_ _sj_ = _Fid_ _l_ − _Fid_ _l −_ 1 −( ) _Fjd_ (l −) _Fjd_ _l −_ 1
∑ ∑
_l=1_ _l=1_
( ) ( ) = _L∣_ dF(id) dx −( _L_ )∣dFjd d∣x +( ) [1] ( )∣ (26)
∫1 dx ∫1 dx _O_ _L_ [)]
≈ ∫1L »»»»»»» ddFxid »»»»»»» dx − ∫1L »»»»»»»» ddFxjd »»»»»»»» dx (
Now we want to remove the absolute value of the integrated function. Due to the continuity ofthe first-order derivative, there must be several zero points that are not extreme points, dividing»»»»»»» »»»»»»» »»»»»»»» »»»»»»»»
the domain of function definition into several open intervals with alternating constant positive and
negative function values. We first define such a set of zeros on the domain of definition D = 1, L :
_Xi =_ _x_ _x ∈_ _D; [d]d[F]x[id]_ = 0 = _i1, i2, ⋯, ip_ _,_ [ ]
(27)
_Xj = {x ∣_ _x ∈_ _D; [d]d[F]x[jd]_ = 0} = {j1, j2, ⋯, jq} _._
For all zero points it ∈ _Xi or jt { ∈_ ∣Xj, they must satisfy the following properties to ensure that they} { }
are not extreme points:
∃ϵ > 0, ∀∆x < ϵ, d _itd −Fid∆x_ [⋅] d _itd +Fid ∆x_ [<][ 0][,] (28)
and jt is the same as it. Therefore, in the sub-interval 1, i1 _,_ _i1, i2_ _, ⋯,_ _ip, L_, Fid _x_ is
alternately constant positive and constant negative, and the same as( ) ( _F)jd_ _x_ .
We assume that the first interval 1, i1 and 1, j1 are constant positive intervals (the same can be ( ) ( ) ( ) ( )
proven for constant negative intervals). In this setting, we can continue to simplify Eq.(26):( )
_L_ dFid dx − (L dF)jd d (x )
∫1 dx ∫1 dx
»»»» »»»» »»»»» »»»»»
= ∫1»i1 ddFx»id dx + ∫i»»i12 ddFx»»id dx + ⋯+ ∫iLp ddFxid dx
(− _j»»»»»»»1_ dFjd»»»»»»» dx + _j»»»»»»»2_ dF»»»»»»»jd dx + ⋯+ »»»»»»»L dF»»»»»»»jd d)x
∫1 dx ∫j1 dx ∫jq dx
= _F(id_ _x_ »»»»»»»» _i11_ [+][ (]»»»»»»»»[−][1][)][ ⋅] _[F]id[(][x]»»»»»»»»[)∣]ii21_ [+ ⋯+]»»»»»»»» [ (][−][1][)][i][p][ ⋅] _[F][id]»»»»»»»»_ [(][x][)∣]i[L]»»»»»»»»p []] )
[− _F(jd_ )∣x _j11_ [+][ (][−][1][)][ ⋅] _[F][jd][(][x][)∣]jj21_ [+ ⋯+][ (][−][1][)][j][q][ ⋅] _[F][jd][(][x][)∣]Ljq_ []]
_p_ _q_
= −[Fid (1 )∣ + Fjd 1 + 2 −1 ⋅ _Fid_ _ik_ − 2 −1 ⋅ _Fjd_ _jk_
∑ ∑
_k=1_ _k=1_
[+ −1( ) ⋅ _Fid(_ )]L − [ −1 ( )⋅[k]F[−]jd[1] _L_ ( ) ( )[k][−][1] ( )]
[( )[i][p][−][1] ( ) ( )[j][q][−][1] ( )]
(29)
-----
_p_
Since Fid _x_ and Fjd _x_ are taken from the functional space, _Fid_ 1 _,_ _Fid_ _ik_ _k=1[}][ and]_
_q_
_Fjd_ 1 _,_ _Fjd_ _jk_ _k=1[}][ can be seen as independent variables. We let]_
( ) ( ) { ( ) { ( )}
**_c =_** −1 ⋅ _Fid_ _L_ − −1 ⋅ _Fjd_ _L_ _,_ (30)
{ ( ) { ( )}
then we can rewrite Eq.(29) to the matrix form:
[(L dF)id[i][p][−][1]dx − ( _L)_ dF(jd )[j]d[q]x[−] =[1] **_Ax +(_** **_c)],_** (31)
∫1 dx ∫1 dx
where AA is the coefficient matrix, and = −1, 1, 2, −2, 2, −»»»»»»»2, ..., »»»»»»» x−1 is the unknown variable vector:⋅ 2,»»»»»»»» −2, 2,»»»»»»»» −2, 2, ..., −1 ⋅ 2 ∈ R[1][×][(][p][+][q][+][2][)]
_p dimensions_ _q dimensions_ (32)
[ ( )[p][−][1] ( )[q] ]
**_x =_** _Fid_ 1 _, FÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏjd_ 1 _, Fid_ _i1_ _, ..., Fid_ _ip_ _, FÍÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÑÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÒÏjd_ _j1_ _, ..., Fjd_ _jq_ ∈ R[(][p][+][q][+][2][)][×][1]
We let ip ≡ _j [q_ mod 2( ), and return to the condition in the mathematical expectation on both sides of( ) ( ) ( ) ( ) ( )][⊤]
the inequality in the main theorem A.1 for the following categorical discussion:
( )
- If Fid _L_ = Fjd _L_ = cd, then c = 0, so Vd _si_ − _Vd_ _sj_ = 0 ⟹ **_Ax = 0;_**
- If Fid(L), Fjd _L(_ ∼) _N_ _cd, δd[2][)][, then][ c][ ≢]_ **[0][, so](** _[ V])_ _d[(][s]i[)]([ −]_ )[V]d[(][s]j[)][ ≠] [0][ ⟹] **_[Ax][ =][ −][c][.]_**
We denote N _A_ as the solution space of **_x_** **_Ax = 0_** and P _A_ as the solution space of **_x_** **_Ax =_**
( ) ( ) (
−c . For x ∈ _P_ _A_, x = xp or x = xp + xn where xp is the special solution and xn is the solution
in zero space,( i.e.), xn ∈ _N_ _A_ . Since rank { ∣A = 1 <} _p + q +(_ 2), the special solution xp { exists, so∣
the size of the solution space P _A_ is larger than N _A_ . In this case, when the variable x is sampled
} ( )
from the real number space, its probability of being in N _A_ is smaller, namely its probability of not
( ) ( )
being in N _A_ is larger. This is exactly equivalent to the form of the proof of Main Theorem A.1, so
( ) ( )
the proof is complete. □
( )
( )
**A.4** **Extended Conclusion**
During the proof of the main theorem, we can unlock a hidden conclusion due to the embedding
interpolation: GLMs with a larger number of hidden layers may achieve more stable detection
_performance, i.e., discrepancies in embedding volatility of different samples will be more obvious._
In Lemma A.2, we have proved the upper bound of differential-integral error order estimation is the
equivalent infinitesimals of 1 _L. Actually, when L →_ +∞, we have
_L_ _b_ df
lim _f_ /a + _[b][ −]_ _[a]_ _i_ − _f_ _a +_ _[b][ −]_ _[a]_ _i −_ 1 = dx, (33)
_L→+∞_ ∑i=1 _L_ _L_ ∫a dx
which is clear according to the definition of Riemann Sum. This means that when the value ofincreases, the differential-integral approximation error will be reduced, so the conclusion of the main»»»»»»»» ( ) ( ( ))»»»»»»»» »»»»»»» »»»»»»» _L_
theorem will be more accurate. In our experiments, we use GPT2-XL (48 layers) and Llama2-7B
(32 layers) as training backbones and find that the average performance of GPT2-XL is higher and
more stable than that of Llama2-7B, while the number of layers of GPT2-XL is 1.5 times that of
Llama2-7B. This further validates the correctness of our extended conclusions.
**B** **Motivation II: Empirical Analysis**
**B.1** **Visualization Projection Setting**
We use UMAP [38] for projection visualization, which is a nonlinear dimensionality reduction
technique for mapping high-dimensional data into low-dimensional spaces. It uses optimization of
the local and global structure of the original data to produce a high-quality mapping that can preserve
the original data’s local and global characteristics.
The UMAP algorithm consists of two steps: (i) Calculate the local similarity between each data point
and its immediate neighbors to construct a local similarity map. (ii) Map the high-dimensional data
to a low-dimensional space by optimizing an objective function that maintains the local and global
structure. The hyperparameter “n_neighbors” in the first step is key, we set it as 10 in our paper.
-----
**B.2** **Visualization Example**
We select the MATH [7] dataset for mathematical reasoning and the OPUS [52] for text generation
(translation). In the MATH, we choose four domains: algebra, geometry, number theory, and
precalculus; In the OPUS, we also choose four domains: Bible, news, TED, and health. We present
examples of inputs and outputs under different domains for mathematical reasoning and translation
scenarios in Table 4 and Table 5, respectively. They correspond to the case projection of Figure 1.
Obviously, under mathematical reasoning, the outputs under different domains may appear exactly
the same, while it is impossible under translation.
**Input** **Output**
_Domain 1: Algebra_
Suppose that f is a function and f [−][1] is the inverse of f . If f 1 = 2, f 2 = 6, and 1
_f_ 3 = 5, then what is f [−][1] _f_ [−][1] 6 ?
( ) ( )
_Domain 2: Geometry_
( ) ( ( ))
Two chords, AB and CD, meet inside a circle at P. If AP = CP = 7, then what is _DP[BP]_ [?] 1
_Domain 3: Number Theory_
The product of the proper positive integer factors of n can be written as n[(][ax][+][b][)/][c], where 1
_x is the number of positive divisors n has, c is a positive integer, and the greatest common_
factor of the three integers a, b, and c is 1. What is a + b + c?
_Domain 4: Precalculus_
Simplify 1
1
1 − tan[2] _x_ [+]
1 − cot[2] _x_ _[.]_
Table 4: Examples of inputs/outputs from different domains in the mathematical reasoning scenario.
**Input** **Output**
_Domain 1: Bible_
The earth yielded grass, herbs yielding seed after their
kind, and trees bearing fruit, with its seed in it, after
their kind; and God saw that it was good.
_Domain 2: News_
Supporters say the tunnels would benefit the
environment and offer Californians a more secure
water supply.
_Domain 3: TED_
```
于是地发生了青草,和结种子的菜蔬,各从其
类;并结果子的树木,各从其类;果子都包着
核。神看着是好的。
支持者们表示,这两条隧道将使环境受益并帮助
加利福尼亚州确保供水更加安全。
```
(Applause) June Cohen: Frank, that was beautiful, so `(鼓掌)` `主持:Frank 刚刚真是太美丽、太感人`
touching. `啦!`
_Domain 4: Health_
The White House Coronavirus Task Force was `白宫冠状病毒工作组(White House Coronavirus`
established on January 29. Task Force)是成立于2020年1月29日的以应对美
```
国COVID-19疫情的工作组。
```
Table 5: Examples of inputs/outputs from different domains in the translation scenario.
**C** **Algorithm: Pseudo-Code of TV Score Computation**
The pseudo-code of our TV score computation pipeline is shown in Algorithm 1.
-----
**Algorithm 1 Trajectory Volatility (TV) Score Computation**
**Input: L: The number of hidden layers**
_N_ : The size of ID dataset
_k: the smoothing differential order of TV score_
**_yl_** 1≤l≤L: the average hidden embedding of the OOD sample in each layer
ˆyl _i_ 1≤l≤L,1≤i≤N : the average hidden embedding of all N ID samples in each layer
1: for l ← 1 to L do
{ }
2: {[ Fitting ID samples] } ˆyl _i_ 1≤l≤L,1≤i≤N to Gaussian distribution Gl = N **_µl, Σl_**
3: _f_ **_yl_** ← **_yl −_** **_µl_** Σl **_yl −_** **_µl_**
4: end for {[ ] } ( )
5: for i → 1 to k do
( ) ( )[⊤] ( )[−][1] ( )
6: **for l ←** 1 to L do
_i_ _i_
7: ∆[(][i][)]Gl = N **_µl_** _[,][ Σ]l_ _t=0[(][−][1][)][i][+][t][C]i[t][µ]l+i[,][ ∑][i]t=0_ [C]i[t][Σ]l+i[)]
8: **end for** ( ) ( )
_i_ _i_ ⊤ _i_ −1 _i_ _i_
9: _f_ **_yl_** ← **_y(l_** − **_µl_** [)][ ←] Σ[N]l[(][∑][i] **_yl_** − **_µl_**
10: end for[(][i][)] ( ) ( ) ( ) ( ) ( )
11: for i →( 1 to) _n do(_ ) ( [)] ( )
12: TVi ← `average` _f_ **_yl_** 1≤l≤L
13: end for
[(][i][)]
**Output:** TVi 1≤i≤k [ ( )]
{ }
**D** **Experimental Setting Details**
**D.1** **Basic Information of Dataset**
For the ID dataset, we use the MultiArith [47], which consists of Math Word Problems on arithmetic
reasoning. For the OOD datasets, we intuitively introduce two types of detection scenarios following
[46]: (i) Far-shift OOD setting, we select the MATH [13] as the OOD data, which spans across
distinct mathematical domains encompassing algebra, geometry, counting and probability, number
theory, and precalculus. It contains college difficulty level math problems, whereas MultiArith
has only elementary school difficulty, and thus can be considered as sourced from far-different
distributions; (ii) Near-shift OOD setting, we select five arithmetic reasoning datasets as the OOD
data: GSM8K [7], SVAMP [42], AddSub [15], SingleEq [21], and SingleOp [21], they all consist of
Math Word Problems like the MultiArith but require different reasoning hops and knowledge points
for solving the problems, and thus can be considered as sourced from near-different distributions. In
addition, we present the data sizes and examples of all ID and OOD datasets, as shown in Table 6.
**D.2** **ID Dataset Split**
For the ID dataset MultiArith, we find that every 100 consecutive samples show a similar quadratic
operation pattern (e.g., samples with id 0-100 are a mixture of addition and subtraction, and id
100-200 are a mixture of addition and multiplication). Therefore, we divide it into 6 subsets (6*100)
in order. In each subset, we take the first 60 as training samples and the last 40 as test samples.
**D.3** **Training Implementation**
We train Llama2-7B [53] and GPT2-XL [6] models on the training split of MultiArith. Llama2-7B is
trained with AdamW optimizer [33] for 10K steps and 8 batch sizes in 4-card RTX 3090 (2 per card).
The learning rate is set to 1e-5, the warmup step to 10, and the maximum gradient normalization to 0.3.
GPT2-XL is trained for 3K steps and 128 batch sizes in a single RTX 3090, and other configurations
are the same as Llama2-7B.
**D.4** **OOD Dataset Selection Rationality**
In this part, we examine the rationality of the OOD data selection, ensuring that the OOD data
distribution significantly differs from the pre-trained data distribution and has not been fully learned
-----
**In-Distribution Dataset**
**MultiArith (Data Size: 600)**
Q: Kaleb was collecting cans for recycling. On Saturday he filled 5 bags up and on Sunday he filled 5 more
bags. If each bag had 4 cans in it, how many cans did he pick up total?
A: 40.0
**Far Shift Out-of-Distribution Dataset** **Near Shift Out-of-Distribution Dataset**
**MATH-Algebra (Data Size: 1187)**
Q: How many real numbers are not in the domain of
the function
**GSM8K (Data Size: 1318)**
Q: Judy teaches 5 dance classes, every day, on the
weekdays and 8 classes on Saturday. If each class has
15 students and she charges $15.00 per student, how
much money does she make in 1 week?
A: 7425
**SVAMP (Data Size: 1000)**
Q: A mailman has to give 38 pieces of junk mail to
each of the 78 blocks. If there are 19 houses on a
block. How many pieces of junk mail should he give
each house?
A: 2.0
**AddSub (Data Size: 395)**
Q: While taking inventory at her pastry shop, Kelly
realizes that she had 0.4 box of baking powder
yesterday, but the supply is now down to 0.3 box. How
much more baking powder did Kelly have yesterday?
A: 0.1
**SingleEq (Data Size: 508)**
Q: Fred had 7 dimes in his bank. His sister borrowed
3 of his dimes. How many dimes does Fred have now?
A: 4.0
**SingleOp (Data Size: 562)**
Q: Pamela starts with 30 bottle caps. Jean takes 26
away. How many bottle caps does Pamela end with?
A: 4.0
_f_ _x_ =
( )
1
_x −_ 64 [+]
1
_x[2]_ − 64 [+]
_x[3]_ − 64 [?]
A: 4
**MATH-Geometry (Data Size: 479)**
Q: Suppose we are given seven points that are equally
spaced around a circle. If P, Q, and R are chosen to
be any three of these points, then how many different
possible values are there for m∠PQR?
A: 5
**MATH-Counting and Probability (Data Size: 474)**
Q: Amy’s grandmother gave her 3 identical chocolate
chip cookies and 4 identical sugar cookies. In how
many different orders can Amy eat the cookies such
that either she eats a chocolate chip cookie first, she
eats a chocolate chip cookie last, or both?
A: 25
**MATH-Number Theory (Data Size: 540)**
Q: Notice that
31 ⋅ 37 = 1147.
Find some integer n with 0 ≤ _n < 2293 such that_
31n ≡ 3 mod 2293 _._
A: 222
( )
**MATH-Precalculus (Data Size: 546)**
−11
1 +
4
−5
Q: Let a =
_, b =_
_, and c =_
Find k if the vectors a + b + c and
⎝ ⎠ ⎝ ⎠ ⎝
3 **b × c** − 8 **c × a** + k **a × b**
are orthogonal.
( ) ( ) ( )
A: 5
Table 6: ID and OOD datasets used in this paper.
-----
|Far-shift OOD Setting|Near-shift OOD Setting|
|---|---|
|Algebra Geometry Cnt.&Prob Num.Theory Precalculus|6 / 1187 0 / 1187 2 / 479 0 / 479 4 / 474 0 / 474 0 / 540 0 / 540 1 / 546 0 / 540|GSM8K SVAMP AddSub SingleEq SingleOp|0 / 1318 0 / 1318 8 / 1000 0 / 1000 13 / 395 0 / 1000 5 / 395 0 / 508 7 / 508 0 / 508|
|---|---|---|---|
**Llama2-7B** **GPT2-XL** **Llama2-7B** **GPT2-XL**
_Dataset_ _Dataset_
Accuracy of pre-trained model Accuracy of pre-trained model
Table 7: Accuracies of all datasets we select as the OOD data in pre-trained GLMs.
during the pre-training phase. Some research [57, 59] have confirmed the absence of data leakage in
Llama2 for MATH and GSM8K datasets, we still conduct experiments and analyses to ensure this.
For GPT2-XL, We can determine this from the time dimension. GPT-2 was released in 2019, but the
MATH and GSM8k datasets were released in 2021, so they are unlikely to appear in the pre-training
data. However, for Llama2-7B, due to the closed-source data, we cannot confirm which data the
model used in the pre-training phase, so we cannot fully determine whether the selected dataset was
OOD for the model from a data perspective.
Therefore, we argue that a dataset can be considered OOD when it is beyond the capability of the base
model as claimed by prior work [50, 31]. We test all ten datasets we select as the OOD data in the
pre-trained Llama2-7B and GPT2-XL model, and Table 7 shows the results. We find that GPT2-XL
cannot handle any of the ten mathematical reasoning tasks, and Llama2-7B performs with very low
accuracy. Therefore, from a capability standpoint, we can ensure that these datasets are OOD for
these two GLMs.
**D.5** **Baseline**
Let x represent the input sequence and y the output sequence, with lengths denoted as nx and ny
respectively. In addition, we assume that p ⋅; θ represents the GLM parameterized by θ that has
been trained in ID dataset D, outputting a sequence of softmax probabilities. We compare some
training-free baseline methods as below:
( )
- Maximum Softmax Probability [14]:
_ny_
max _[p][(][y][i][∣][y][≺][i][,][ x][;][ θ][)]_ _._ (34)
∑ _ny_
_i=1_
- Monte-Carlo Dropout [10]:
_p_ **_y_** **_x; θ_** _q_ **_θ_** dθ, (35)
∫
where q **_θ_** ≈ _p_ **_θ_** _D_ . ( ∣ ) ( )
- Sequence Perplexity [46]:
( ) ( ∣ ) _ny_ − _N[1]_
_p_ _yi_ _y≺i, x; θ_ _._ (36)
∏
_i=1_
- Input Embedding [46]: [ ( ∣ )]
**_x −_** **_µx_** **Σ[−]x[1]** _x[)][,]_ (37)
where µx and Σx represent the mean and variance of the Gaussian distribution associated with the
first-layer hidden state. ( )[⊤] [(][x][ −] **_[µ]_**
- Output Embedding [46]:
**_y −_** **_µy_** **Σ[−]y** [1] _y[)][,]_ (38)
where µy and Σy represent the mean and variance of the Gaussian distribution associated with the
final-layer hidden state. ( )[⊤] [(][y][ −] **_[µ]_**
-----
**E** **Full Main Experimental Results**
**OOD Detection.** AUROC and FPR95 results of each dataset are shown in Table 8 (Llama2-7B)
and Table 9 (GPT2-XL).
**OOD Quality Estimation.** Kendall’s τ correlation results of each dataset are shown in Table 10
(Llama2-7B) and Table 11 (GPT2-XL).
|Col1|Far-shift OOD Setting|Col3|
|---|---|---|
|Dataset Algebra Geometry Counting and Probability Number Theory Precalculus Average Method AUROC ↑/ FPR95 ↓|Algebra Geometry Counting and Probability Number Theory Precalculus||
|MS-Prob [14] MC-Drop [10] PPL [3] I-Emb [46] O-Emb [46]|79.97±1.60 / 80.97±4.90 82.60±1.25 / 82.57±4.62 63.87±1.64 / 96.19±1.57 76.20±1.46 / 85.20±2.69 90.68±0.94 / 62.27±4.03 72.12±1.68 / 85.43±6.60 74.23±1.74 / 81.17±6.28 55.64±2.87 / 97.15±1.34 67.53±2.08 / 89.09±5.41 73.61±2.66 / 82.36±4.78 84.17±1.43 / 52.75±5.13 88.67±1.36 / 53.32±5.38 77.42±1.97 / 71.28±3.90 83.88±2.13 / 68.90±4.22 94.05±0.43 / 19.03±3.17 81.51±1.09 / 62.84±4.56 75.41±0.97 / 69.66±2.07 62.53±1.30 / 88.35±1.72 84.42±0.85 / 54.80±7.25 75.59±0.92 / 63.71±2.87 76.95±1.54 / 74.92±3.02 78.72±1.31 / 80.97±2.23 61.43±1.40 / 88.45±1.58 70.23±1.36 / 80.18±1.71 86.97±1.32 / 51.51±2.27|78.66±1.38 / 81.44±3.56 68.63±2.21 / 87.04±4.88 85.64±1.46 / 53.06±4.36 75.89±1.03 / 67.87±3.69 74.86±1.39 / 75.21±2.16|
|TV score (Ours) w/ DiSmo (Ours)|98.87±0.16 / 4.67±1.41 99.03±0.09 / 3.70±0.42 97.70±0.15 / 8.83±1.33 98.43±0.13 / 7.37±1.46 99.78±0.02 / 1.47±0.20 94.71±0.93 / 39.65±7.28 94.08±0.80 / 50.52±6.14 83.08±1.28 / 80.07±1.57 94.57±0.75 / 37.74±7.58 99.82±0.02 / 1.11±0.19|98.76±0.11 / 5.21±0.98 93.25±0.76 / 41.82±4.69|
|∆(bold - underline)|+14.70 / -48.08 +10.36 / -49.62 +20.28 / -62.45 +14.01 / -47.43 +5.77 / -17.92|+13.12 / -47.85|
|Near-shift OOD Setting|||
|Dataset GSM8K SVAMP AddSub SingleEq SingleOp Average Method AUROC ↑/ FPR95 ↓|||
|MS-Prob [14] MC-Drop [10] PPL [3] I-Emb [46] O-Emb [46]|53.08±1.67 / 94.07±1.86 56.56±1.53 / 90.06±2.39 63.31±1.88 / 87.36±2.42 66.68±1.32 / 86.15±2.61 61.07±1.30 / 86.91±2.76 48.87±2.43 / 96.78±1.21 44.90±2.76 / 92.33±2.09 57.34±1.57 / 89.15±2.34 54.09±2.42 / 90.07±1.96 56.46±1.85 / 91.21±1.83 52.24±2.57 / 95.56±1.21 55.12±1.80 / 89.24±2.09 62.88±1.76 / 80.96±2.34 67.14±1.34 / 81.38±1.96 59.39±1.98 / 83.30±1.83 45.68±1.50 / 95.05±1.17 60.92±1.34 / 86.97±4.13 66.28±0.92 / 76.09±2.93 61.18±1.22 / 87.36±1.89 67.60±1.20 / 77.78±2.53 35.39±1.24 / 91.22±1.27 36.77±1.14 / 90.41±1.61 63.08±0.92 / 77.12±3.11 43.70±1.02 / 86.69±0.91 43.55±0.99 / 86.87±1.04|60.14±1.54 / 88.91±2.41 52.33±2.21 / 91.92±1.89 59.35±1.89 / 86.09±1.89 60.33±1.37 / 84.65±2.53 44.50±1.06 / 86.46±1.59|
|TV score (Ours) 94.88±0.25 / 14.22±1.64 94.51±0.20 / 12.89±1.11 85.84±1.06 / 82.63±2.22 93.97±0.24 / 17.39±1.09 94.00±0.20 / 14.61±0.85 92.64±0.39 / 28.39±1.38 w/ DiSmo (Ours) 55.21±1.91 / 95.71±0.88 38.24±1.53 / 87.06±0.94 87.06±0.94 / 71.02±4.33 56.66±1.34 / 93.76±1.28 47.46±1.32 / 92.48±1.13 56.99±1.41 / 88.01±1.71|||
|∆(bold - underline)|+41.80 / -77.00 +33.59 / -74.08 +20.78 / -5.07 +26.83 / -63.99 +26.40 / -63.17|+32.31 / -56.26|
Table 8: OOD Detection — Offline Discrimination (Llama2-7B): AUROC and FPR95 results
(p-value > 0.05 are grayed out). Underline denote SOTA among all baselines and bold denote SOTA
among all methods.
|Col1|Far-shift OOD Setting|Col3|
|---|---|---|
|Dataset Algebra Geometry Counting and Probability Number Theory Precalculus Average Method AUROC ↑/ FPR95 ↓|Algebra Geometry Counting and Probability Number Theory Precalculus||
|MS-Prob MC-Drop PPL I-Emb O-Emb|72.13±1.42 / 78.35±2.35 64.09±1.78 / 87.42±1.24 68.45±1.56 / 82.35±1.57 69.37±1.43 / 82.09±1.90 78.67±0.89 / 61.23±3.02 62.41±2.10 / 88.02±1.79 66.63±1.82 / 82.49±1.12 59.32±2.03 / 92.34±1.54 63.88±1.92 / 86.71±1.95 70.19±1.47 / 73.90±1.87 84.24±1.01 / 63.78±2.38 82.17±0.89 / 67.90±2.45 79.12±1.20 / 72.03±1.23 72.40±1.37 / 71.65±1.45 86.19±0.75 / 47.27±2.98 86.23±0.78 / 46.12±1.45 83.20±0.94 / 53.29±2.06 79.58±1.43 / 60.45±2.95 89.44±0.64 / 52.49±1.87 92.86±0.39 / 34.32±2.16 78.13±1.14 / 64.46±2.78 77.98±1.17 / 68.82±3.26 71.07±1.67 / 84.63±4.24 80.25±0.97 / 56.23±4.05 82.34±0.86 / 54.08±2.79|70.54±1.42 / 78.29±2.02 66.18±1.87 / 84.69±1.65 80.82±1.04 / 64.53±2.10 86.26±0.84 / 49.33±2.10 77.95±1.16 / 65.64±3.42|
|TV score (Ours) w/ DiSmo (Ours)|98.35±0.07 / 6.24±0.24 97.27±0.11 / 9.23±0.35 85.52±0.12 / 52.51±1.78 92.96±0.06 / 29.86±1.26 93.24±0.04 / 22.68±1.14 93.68±0.19 / 23.24±1.45 94.39±0.15 / 12.17±0.79 95.83±0.13 / 9.86±0.46 99.17±0.04 / 2.42±0.22 99.62±0.02 / 1.75±0.13|93.47±0.08 / 24.10±0.95 96.54±0.11 / 9.89±0.61|
|∆(bold - underline)|+12.12 / -39.88 +14.07 / -44.06 +16.25 / -50.59 +9.27 / -50.07 +6.24 / -32.57|+10.28 / -39.44|
|Near-shift OOD Setting|||
|Dataset GSM8K SVAMP AddSub SingleEq SingleOp Average Method AUROC ↑/ FPR95 ↓|||
|MS-Prob MC-Drop PPL I-Emb O-Emb|54.06±1.58 / 92.45±2.37 65.50±1.43 / 73.62±4.18 70.60±0.92 / 77.56±2.76 79.96±0.87 / 57.38±1.95 65.47±1.18 / 80.37±2.02 63.44±2.08 / 79.85±3.35 62.13±1.65 / 77.28±3.21 67.97±1.62 / 71.42±2.88 71.29±1.43 / 69.22±2.41 52.89±1.80 / 92.78±0.67 72.89±1.56 / 74.63±1.52 70.79±1.38 / 79.65±1.21 67.50±1.04 / 87.76±0.92 87.14±0.57 / 43.09±1.23 70.40±1.06 / 76.80±1.45 87.58±1.14 / 48.45±2.06 84.81±1.20 / 62.75±5.68 80.34±0.64 / 46.62±2.17 81.44±0.49 / 49.05±3.24 81.91±0.93 / 57.65±2.67 82.05±1.36 / 60.44±3.96 81.42±2.02 / 68.43±4.38 72.98±0.85 / 66.72±1.77 80.00±1.02 / 70.23±1.87 79.94±0.94 / 57.68±1.60|67.12±1.20 / 76.27±2.66 63.54±1.72 / 78.08±2.50 73.74±1.12 / 72.39±1.27 83.22±0.88 / 52.90±3.16 79.28±1.24 / 64.70±2.72|
|TV score (Ours) w/ DiSmo (Ours)|98.26±0.06 / 1.78±0.04 98.99±0.02 / 1.13±0.03 80.56±0.82 / 52.23±1.29 97.35±0.23 / 13.91±0.45 99.12±0.02 / 0.07±0.01 97.62±0.21 / 3.58±0.16 92.19±0.44 / 13.07±1.68 83.35±0.57 / 41.70±1.29 97.83±0.03 / 9.94±0.32 99.94±0.02 / 0.03±0.01|94.86±0.23 / 13.82±0.36 94.19±0.25 / 13.66±0.69|
|∆(bold - underline)|+10.68 / -46.67 +14.18 / -61.62 +3.01 / -4.92 +10.69 / -33.15 +18.03 / -57.62|+11.64 / -39.24|
Table 9: OOD Detection — Offline Discrimination (GPT2-XL): AUROC and FPR95 results
(p-value > 0.05 are grayed out). Underline denote SOTA among all baselines and bold denote SOTA
among all methods.
-----
|Dataset Method|ID + Far-shift OOD|Col3|ID + Near-shift OOD|Col5|
|---|---|---|---|---|
||Algebra Geometry Cnt.&Prob. Num.Theory Precalculus|Average|GSM8K SVAMP AddSub SingleEq SingleOp|Average|
**Kendall Rank Correlation Coefficient**
|MS-Prob PPL I-Emb O-Emb|0.035±0.025 0.024±0.020 0.084±0.022 -0.043±0.021 0.021±0.014 -0.027±0.020 0.071±0.013 0.112±0.015 0.035±0.016 0.036±0.011 0.074±0.020 0.119±0.015 0.054±0.018 0.111±0.016 0.034±0.013 0.050±0.023 0.078±0.019 0.064±0.020 0.065±0.020 0.034±0.009|0.024±0.020 0.050±0.015 0.078±0.016 0.058±0.018|0.064±0.022 0.067±0.015 0.076±0.016 0.057±0.019 -0.075±0.017 0.073±0.019 0.027±0.017 0.082±0.015 0.091±0.019 0.079±0.017 0.042±0.020 -0.025±0.016 0.052±0.014 0.055±0.020 0.058±0.018 0.042±0.018 -0.001±0.013 0.056±0.015 0.032±0.015 0.061±0.015|0.038±0.018 0.074±0.017 0.036±0.018 0.038±0.015|
|---|---|---|---|---|
|TV score (Ours) w/ DiSmo (Ours)|0.182±0.016 0.116±0.011 0.191±0.011 0.174±0.014 0.142±0.009 0.095±0.020 0.059±0.014 0.123±0.020 0.131±0.016 0.148±0.009|0.161±0.012 0.111±0.016|0.146±0.010 0.209±0.019 0.195±0.018 0.145±0.017 0.101±0.020 0.178±0.025 0.113±0.015 0.121±0.016 0.079±0.017 0.076±0.019|0.159±0.017 0.113±0.018|
|∆(bold - underline)|+0.021 -0.003 +0.079 +0.109 +0.112|+0.083|+0.073 +0.142 +0.039 +0.055 +0.022|+0.085|
**Spearman Rank Correlation Coefficient**
|MS-Prob PPL I-Emb O-Emb|0.027±0.021 0.057±0.019 0.039±0.021 0.067±0.021 0.004±0.017 0.039±0.014 0.018±0.018 0.073±0.017 0.051±0.015 0.045±0.015 0.150±0.021 0.136±0.018 0.061±0.015 0.038±0.016 0.127±0.016 0.002±0.018 0.001±0.020 0.015±0.015 0.033±0.016 0.012±0.018|0.038±0.020 0.045±0.016 0.102±0.017 0.025±0.017|0.071±0.021 -0.086±0.018 0.063±0.019 0.081±0.015 0.002±0.019 0.086±0.019 0.090±0.019 0.016±0.019 0.039±0.015 0.017±0.016 0.042±0.022 0.128±0.016 0.154±0.016 0.110±0.017 0.140±0.015 -0.040±0.017 -0.019±0.016 0.045±0.019 0.101±0.018 -0.027±0.017|0.026±0.018 0.050±0.018 0.115±0.017 0.012±0.017|
|---|---|---|---|---|
|TV score (Ours) w/ DiSmo (Ours)|0.122±0.012 0.169±0.015 0.102±0.016 0.126±0.016 0.216±0.014 0.139±0.014 0.148±0.013 0.176±0.017 0.178±0.017 0.121±0.016|0.147±0.015 0.152±0.015|0.124±0.016 0.188±0.017 0.127±0.017 0.165±0.017 0.188±0.019 0.071±0.018 0.150±0.020 0.167±0.014 0.131±0.017 0.153±0.017|0.158±0.017 0.134±0.017|
|∆(bold - underline)|-0.011 +0.033 +0.103 +0.127 +0.089|+0.050|+0.038 +0.060 +0.013 +0.055 +0.048|+0.043|
Table 10: OOD Quality Estimation (Llama2-7B): Kendall’s τ and Spearman correlation between
various OOD scores and benchmark quality metric binary matching. Each column shows the
correlation when ID and OOD samples are merged. Underline denotes the SOTA among all baselines,
and bold denotes the SOTA among our methods.
|Dataset Method|ID + Far-shift OOD|Col3|ID + Near-shift OOD|Col5|
|---|---|---|---|---|
||Algebra Geometry Cnt.&Prob. Num.Theory Precalculus|Average|GSM8K SVAMP AddSub SingleEq SingleOp|Average|
**Kendall Rank Correlation Coefficient**
|MS-Prob PPL I-Emb O-Emb|0.078±0.018 0.032±0.015 0.093±0.014 0.059±0.017 0.068±0.012 0.023±0.014 0.086±0.015 0.043±0.016 -0.009±0.015 0.036±0.012 0.084±0.012 0.052±0.011 0.089±0.013 0.062±0.011 0.007±0.010 0.089±0.012 0.023±0.012 0.058±0.013 0.014±0.012 0.067±0.011|0.066±0.015 0.036±0.014 0.059±0.012 0.050±0.012|0.032±0.024 0.061±0.020 0.088±0.018 0.023±0.017 0.081±0.019 0.068±0.022 0.008±0.022 0.035±0.018 0.001±0.015 0.064±0.015 0.041±0.019 0.010±0.020 -0.008±0.016 0.019±0.015 0.000±0.018 0.068±0.018 0.036±0.020 0.033±0.018 0.022±0.014 0.021±0.017|0.057±0.018 0.035±0.018 0.012±0.018 0.036±0.017|
|---|---|---|---|---|
|TV score (Ours) w/ DiSmo (Ours)|0.127±0.010 0.142±0.010 0.158±0.009 0.098±0.011 0.167±0.009 0.134±0.010 0.112±0.009 0.132±0.010 0.161±0.009 0.158±0.009|0.138±0.010 0.139±0.009|0.087±0.013 0.166±0.019 0.110±0.015 0.152±0.013 0.141±0.016 0.102±0.011 0.099±0.021 0.094±0.015 0.126±0.013 0.193±0.012|0.131±0.015 0.123±0.014|
|∆(bold - underline)|+0.045 +0.056 +0.065 +0.099 +0.099|+0.080|+0.034 +0.105 +0.022 +0.129 +0.112|+0.074|
**Spearman Rank Correlation Coefficient**
|MS-Prob PPL I-Emb O-Emb|0.001±0.015 0.035±0.017 0.054±0.016 0.076±0.016 0.056±0.018 0.056±0.018 0.076±0.016 0.024±0.016 0.035±0.017 -0.002±0.016 0.141±0.016 0.135±0.016 0.087±0.014 0.097±0.017 0.029±0.016 0.012±0.018 -0.098±0.016 0.076±0.016 0.054±0.017 0.035±0.017|0.044±0.016 0.038±0.017 0.098±0.016 0.016±0.017|0.032±0.023 0.066±0.024 0.031±0.021 0.067±0.021 0.087±0.020 0.023±0.021 0.094±0.022 0.065±0.016 0.088±0.019 0.019±0.019 0.043±0.018 0.084±0.020 0.098±0.016 0.024±0.016 0.090±0.017 0.010±0.019 0.059±0.025 -0.077±0.019 0.084±0.019 0.067±0.023|0.057±0.022 0.058±0.019 0.068±0.016 0.029±0.021|
|---|---|---|---|---|
|TV score (Ours) w/ DiSmo (Ours)|0.165±0.013 0.110±0.012 0.141±0.012 0.086±0.015 0.115±0.014 0.173±0.011 0.208±0.010 0.072±0.015 0.145±0.019 0.109±0.016|0.123±0.013 0.141±0.014|0.112±0.012 0.164±0.012 0.125±0.018 0.168±0.018 0.161±0.014 0.134±0.015 0.127±0.016 0.189±0.017 0.167±0.017 0.155±0.016|0.146±0.015 0.154±0.016|
|∆(bold - underline)|+0.032 +0.073 +0.054 +0.048 +0.059|+0.043|+0.091 +0.070 +0.091 +0.080 +0.071|+0.086|
Table 11: OOD Quality Estimation (GPT2-XL): Kendall’s τ and Spearman correlation between
various OOD scores and benchmark quality metric binary matching. Each column shows the
correlation when ID and OOD samples are merged. Underline denotes the SOTA among all baselines,
and bold denotes the SOTA among our methods.
**F** **Beyond Mathematical Reasoning**
Our method has a wider range of application scenarios beyond mathematical reasoning. To verify
generalizability, we choose the multiple-choice quizzing task, which has the same “pattern collapse”
property as mathematical reasoning, since the output space is limited to the “ABCD” four options.
We select the MMLU dataset [12] and choose eight domains among it: high school mathematics,
high school physics, high school chemistry, high school biology, high school geography, high school
government and politics, high school psychology, high school statistics. We test eight rounds, each
using one of the domains as the ID dataset and the remaining seven domains as the OOD datasets.
We use Llama2-7B as the training backbone, each model is trained for 3K steps and 8 batch sizes in
4-card NVIDIA Tesla V100 (2 per card). The AUROC score matrices are shown in Figure 5(a)-(e),
presenting the results for TV score, input embedding, output embedding, and perplexity, respectively.
In the figures, the Roman numeral I - VIII are represented as:
- I = high school mathematics
-----
- II = high school physics
- III = high school chemistry
- IV = high school biology
- V = high school geography
- VI = high school government and politics
- VII = high school psychology
- VIII = high school statistics
We find that MS-Prob and PPL nearly fail on the multiple-choice task and the output embedding is
not as excellent as expected, which is caused by the pattern collapse phenomenon.
Our method is comparable to the input embedding method and has very good absolute performances.
For some far-shift OOD scenarios, e.g., mathematics-psychology (I-VII), physics-politics (II-VI),
etc., performances of the input embedding method and our method are basically the same, and there
exists a reasonable range of competing phenomena, e.g., our method performs more advantageously
under physics-biology (II-IV), and the input embedding method is better under physics-geography
(II-V). For some near-shift OOD scenarios, e.g., mathematics-statistics (I-VIII), where both domains
essentially belong to the category of math, our method will be more well-performed, indicating that
the embedding-based method produces performance degradation in fine-grained scenarios, while our
method possesses stronger robustness.
Overall, our method is scalable and has greater advantages in fine-grained detection scenarios.
**G** **Discussion: Extended Analysis**
**G.1** **Hyperparameter Analysis: Smoothing Order k**
In the main experiments, we found that differential smoothing is not as effective as the basic TV
score, with excellent results occurring on only a few datasets. Figure 6 visualizes the results for the
OOD detection scenario with no smoothing (k = 0) and smoothing order k 1-5. We find that there
is a significant effect of differential smoothing in two cases: (1) very good performance without
smoothing, e.g., the precalculus dataset, where the AUROC results are almost close to 100%; and (2)
significantly poor performance without smoothing, e.g., the AddSub dataset, where the FPR95 results
on all other near-shift OOD datasets are 20 below, the results on this dataset are more than 80, when
differential smoothing helps to eliminate some of the noise features and helps better detection.
In addition, for the case of k > 0, the peak performance basically occurs in the case of k = 1 or 2,
which indicates that when k is too large, the phenomenon of over-smoothing tends to occur. Too
much useful feature information is lost, leading to a decrease in detection accuracy.
**G.2** **Dilemmas of Embedding Representations in Reasoning.**
|OOD Domain (a) Input Embedding MaDis (b) Output Embedding (w/ CoT) MaDis algebra geometry cnt.&prob num.theory precalculus algebra geometry cnt.&prob num.theory precalculus algebra - 63.77 80.01 58.79 50.80 - 71.37 84.24 65.06 56.71 Domain geometry 85.68 - 88.14 86.55 69.03 72.12 - 92.42 86.08 70.89 cnt.&prob 49.95 44.94 - 38.02 51.76 46.19 53.61 - 40.15 51.63 num.theory 66.35 78.87 50.00 - 71.93 63.02 78.76 69.85 - 67.96 ID precalculus 85.14 79.31 86.18 89.34 - 79.84 85.65 86.77 90.35 -|Col2|
|---|---|
||(a) Input Embedding MaDis|
||algebra geometry cnt.&prob num.theory precalculus|
|algebra Domain geometry cnt.&prob num.theory ID precalculus|- 63.77 80.01 58.79 50.80 85.68 - 88.14 86.55 69.03 49.95 44.94 - 38.02 51.76 66.35 78.87 50.00 - 71.93 85.14 79.31 86.18 89.34 -|
Table 12: AUROC score matrix produced after alternating the MATH dataset’s five domains as ID and
OOD data measured by (a) Input Embedding Mahalanobis Distance and (b) Output Embedding
**(w/ CoT) Mahalanobis Distance. Darker colors represent better performances.**
Prior work [46] has demonstrated that traditional training-free algorithms are ineffective in text
generation scenarios apart from embedding-based methods. Nevertheless, mathematical reasoning
-----
(a) Maximum Probability (b) Sequence Perplexity
(c) Input Embedding (d) Output Embedding
(e) TV score (Ours)
Figure 5: AUROC score matrix in MMLU dataset of different OOD scores. Rows represent ID data,
and columns represent OOD data.
-----
|Col1|Algebra Cnt.&Prob Precalculus Geometry Num.Theory|
|---|---|
|||
|||
|||
|||
|GSM8K AddSub SingleOp SVAMP SingleEq|Col2|
|---|---|
|||
|||
|||
|||
Algebra Cnt.&Prob Precalculus GSM8K AddSub SingleOp
Geometry Num.Theory SVAMP SingleEq
100 100
80 80
AUROC 6040 6040
20 20
0 0
0 1 2 3 4 5 0 1 2 3 4 5
100 100
80 80
FPR95 60 60
40 40
20 20
0 0
0 1 2 3 4 5 0 1 2 3 4 5
Smoothing Order k
Figure 6: Smoothing order k analysis: k ranges from 0 − 5 (TV-MaDis for k = 0). The upper part
is for the OOD detection scenario and the lower part is for the OOD quality estimation scenario; the
left part is for the far-shift OOD datasets and the right part is for the near-shift OOD datasets.
renders embedding-based methods ineffective, as verified in our experiments. We now explore
embedding-based methods in reasoning scenarios to understand their failure causes better.
We select SimCSE [11] as a powerful sentence embedding technique to represent input samples, and
let the five domains in the MATH dataset alternately serve as ID and OOD data for detection. Table
12(a) presents the AUROC score matrix. We find that the accuracy of different ID-OOD pairs varies
largely. For example, the accuracy is generally high when geometric or calculus domains are used as
the ID data, but the accuracy is generally randomized when Cnt.&Prob is used as the ID data. This
phenomenon can illustrate that as a semantic representation, embedding is currently unable to
**measure difficulty and digit accurately in mathematical reasoning scenarios.**
**G.3** **Can Chain-of-Thought Address Pattern Collapse?**
A straightforward approach to addressing pattern collapse in the output space is to leverage chainof-thought (CoT) techniques [54] to expand the output space size. Likewise, we adopt the solution
steps associated with each sample in the MATH dataset as the output and employ SimCSE to
derive embedding representations. The experimental setup aligns with Sec.4.1, and the results
are shown in Table 12(b). We note a similar phenomenon as in Table 12(a), i.e., the detection
accuracy under different ID-OOD pairs varies greatly, and thus the detection randomness is more
pronounced. This suggests that despite that CoT expands the output space size, the output answer
is still essentially related to the difficulty and digit of mathematical reasoning, and the semantic
embedding representation cannot reflect these features accurately.
-----
| [
"Zhuosheng, Zhang",
"Pei, Zhang",
"Rui, Wang",
"Baosong, Yang",
"Yiming, Wang",
"Derek F., Wong"
] | 2024-05-22T00:00:00 | NeurIPS 2024 | true | 1 | 0 | null | http://arxiv.org/abs/2405.14039 | https://arxiv.org/abs/2405.14039 | https://www.semanticscholar.org/paper/1eb7973680688611d76fb8c94c27cdfc0c949d63 |
Weak-to-Strong Reasoning | When large language models (LLMs) exceed human-level capabilities, it becomes increasingly challenging to provide full-scale and accurate supervision for these models. Weak-to-strong learning, which leverages a less capable model to unlock the latent abilities of a stronger model, proves valuable in this context. Yet, the efficacy of this approach for complex reasoning tasks is still untested. Furthermore, tackling reasoning tasks under the weak-to-strong setting currently lacks efficient methods to avoid blindly imitating the weak supervisor including its errors. In this paper, we introduce a progressive learning framework that enables the strong model to autonomously refine its training data, without requiring input from either a more advanced model or human-annotated data. This framework begins with supervised fine-tuning on a selective small but high-quality dataset, followed by preference optimization on contrastive samples identified by the strong model itself. Extensive experiments on the GSM8K and MATH datasets demonstrate that our method significantly enhances the reasoning capabilities of Llama2-70b using three separate weak models. This method is further validated in a forward-looking experimental setup, where Llama3-8b-instruct effectively supervises Llama3-70b on the highly challenging OlympicArena dataset. This work paves the way for a more scalable and sophisticated strategy to enhance AI reasoning powers. All relevant code and resources are available in \url{https://github.com/GAIR-NLP/weak-to-strong-reasoning}. | A progressive learning framework that enables the strong model to autonomously refine its training data, without requiring input from either a more advanced model or human-annotated data is introduced. | ## Weak-to-Strong Reasoning
**Yuqing Yang[2,4]** **Yan Ma[2,3,4]** **Pengfei Liu[1,3,4*]**
1Shanghai Jiao Tong University 2Fudan University
3Shanghai AI Laboratory 4Generative AI Research Lab (GAIR)
_{yuqingyang21, yanma23}@m.fudan.edu.cn_ [email protected]
**Abstract**
When large language models (LLMs) exceed human-level capabilities, it becomes increasingly challenging
to provide full-scale and accurate supervisions for these models. Weak-to-strong learning, which leverages
a less capable model to unlock the latent abilities of a stronger model, proves valuable in this context. Yet,
the efficacy of this approach for complex reasoning tasks is still untested. Furthermore, tackling reasoning
tasks under the weak-to-strong setting currently lacks efficient methods to avoid blindly imitating the weak
supervisor including its errors. In this paper, we introduce a progressive learning framework that enables
**the strong model to autonomously refine its training data, without requiring input from either a more**
**advanced model or human-annotated data. This framework begins with supervised fine-tuning on a se-**
lective small but high-quality dataset, followed by preference optimization on contrastive samples identified
by the strong model itself. Extensive experiments on the GSM8K and MATH datasets demonstrate that our
method significantly enhances the reasoning capabilities of Llama2-70b using three separate weak models.
This method is further validated in a forward-looking experimental setup, where Llama3-8b-instruct ef**fectively supervises Llama3-70b on the highly challenging OlympicArena dataset. This work paves the**
way for a more scalable and sophisticated strategy to enhance AI reasoning powers. All relevant code and
[resources are available in https://github.com/GAIR-NLP/weak-to-strong-reasoning.](https://github.com/GAIR-NLP/weak-to-strong-reasoning)
16
15
14
13
12
11
10
70
60
50
40
30
20
: Weak Model
: Strong Model
33.81
: Weak Model
: Strong Model
15.65
13.10
12.46
11.28
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|68.16|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
|42.38||||||||
|||||||||
|33.81||||||||
|||||||||
|W L|eak Floo lama2- on|r Fu 7b GSM|ll Weak super 8K (Co|FT O vises bbe|ur Stage Llama et al.,|I O 2-70 2021|ur Stage b ).|
Figure 1: (a): Test accuracy on GSM8K using Llama2-7b to supervise Llama2-70b. (b): Test accuracy on
OlympicArena using Llama3-8b-instruct to supervise Llama3-70b. “Weak Floor” refers to the results of the weak
model. “Full Weak FT” refers to the results of the baseline where the strong model is naively fine-tuned on the
full dataset generated by the weak model. “Our Stage I” represents the results from the first stage of supervised
fine-tuning using our proposed weak-to-strong method. Note that our method in Stage I produces three variants of
enhanced strong models and we present the best results here. “Our Stage II” denotes the results from the second
stage of preference optimization using our method.
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|15.65|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|13.10||||||||
|12.46 11.28||||||||
|||||||||
|||||||||
|||||||||
|W a o 0 he se at O|eak Floo ma3-8b on Oly super b. “We re the nts the our m ur Sta|r Fu -inst mpic vise ak stro res etho ge I|ll Weak ruct Arena Llam Floor” ng mo ults fr d in S I” den|FT O supe (Hua a2-7 refe del om tage otes|ur Stage rvises ng et a 0b. ( rs to t is naiv the fir I prod the re|I O Llam l., 20 b): he re ely st st uce sults|ur Stage a3-70b 24). Test a sults fine-tu age of s three from|
1 Corresponding Author.
-----
**1** **Introduction**
“A student need not be inferior to the teacher; a teacher need not be wiser than the student.”
— On Teachers
As the pursuit of Artificial General Intelligence
(AGI) advances, the creation of superintelligent sys- **Reasoning Problem:Joy can read 8 pages of a book in**
tems—models that exceed human cognitive capabili- 20 minutes. How many hours will
ties—remains a key ambition within the field (Robert, it take her to read 120 pages?
2017; Altman et al., 2023; Puthumanaillam et al., **Solution:** **Solution:**
2024). This quest introduces a host of challenges, Joy can read 8/20 = 0.4 pages in a minute. Joy can read 8/20 = 0.4 pages in a minute.
especially concerning the supervision and learning To read 120 pages, it will take her 120/0.4 =300 minutes = 5 hours. To read 120 pages, it will take her 120*0.4 =48 minutes = 0.8 hours.
_paradigms for these advanced AI models. Conven-_
tional supervision methods, which typically depend
on human oversight (Christiano et al., 2017; Ouyang **I think you're right.I need to learn from** **I think you're wrong.I won't let myself make**
et al., 2022; Sun et al., 2024) or guidance (i.e., dis- your reasoning process. such a mistake.
tilled knowledge) from more advanced models (Bai
et al., 2022; Lee et al., 2023; Peng et al., 2023), be- Evolution of Strong Model
come inadequate as the capabilities of AI exceed those
of their supervisors (Bowman et al., 2022; Sang et al.,
2024). To address this issue, we focus on the weak-to_strong learning paradigm (Burns et al., 2023), which_
operates under a unique task setting where only a less
capable model and a stronger[1] but not fully utilized
**Reasoning Problem:**
Joy can read 8 pages of a book in
20 minutes. How many hours will
it take her to read 120 pages?
**Solution:** **Solution:**
Joy can read 8/20 = 0.4 pages in a minute. Joy can read 8/20 = 0.4 pages in a minute.
To read 120 pages, it will take her 120/0.4 = To read 120 pages, it will take her 120*0.4 =
300 minutes = 5 hours. 48 minutes = 0.8 hours.
**I think you're right.** **I think you're wrong.**
I need to learn from I won't let myself make
your reasoning process. such a mistake.
Evolution of Strong Model
model are available. Figure 2: Illustration of weak-to-strong reasoning through the
The central question of weak-to-strong learning is strong model self-refining its training data.
whether models with limited capabilities can effectively guide the development of more advanced, stronger models. Previous studies by Burns et al. (2023) have
demonstrated the feasibility of it in classification, chess, and reward modeling tasks. However, the applicability of
this setup to more complex reasoning tasks, which demand more than mere extrapolation or pattern recognition,
remains an open question. Complex reasoning represents a key aspect of human cognition, crucial for assessing
whether LLMs can emulate or surpass human-like capabilities in comprehending the world, making decisions, and
solving problems (Qiao et al., 2023; Huang and Chang, 2023; Chang et al., 2023). Given the complexity and the
critical nature of these tasks, applying the weak-to-strong learning framework to advanced reasoning challenges is
essential, particularly within the broader context of achieving superintelligence.
Although Burns et al. (2023) suggest that naively fine-tuning strong models on the full set of noisy data
produced by weak models, named full weak fine-tuning, can consistently improve their performance over the weaker
counterparts, this approach is still far from recovering the full capabilities of strong models, and our experiments
show that it loses effectiveness when facing more complex reasoning challenges. They also propose an auxiliary
confidence loss to mitigate the issue of strong models imitating the errors of their supervisors. However, this
method is tailored to classification tasks with a set of fixed labels and does not naturally extend to open-ended
generation tasks including reasoning. Currently, there is a lack of effective methods beyond naive fine-tuning to
prevent the overfit of weak errors and to further elicit the intrinsic reasoning abilities of strong models within the
**weak-to-strong reasoning framework.**
To achieve the above goal, we introduce a progressive refinement learning framework, guided by the principle
that a model can enhance its capabilities more effectively by initially focusing on smaller, more reliable subsets of
data, and then iteratively expanding its learning scope, as illustrated in Fig. 2. In the first stage, we hypothesize
that it is more advantageous to utilize smaller quantities of data that are likely to be more accurate. We achieve
this by combining weak data, generated by the less capable model, with data self-generated by the more advanced
model through in-context learning. This blend is then used to selectively curate datasets for subsequent supervised
fine-tuning. In the second stage, upon having developed a strong model with improved reasoning capabilities, we
utilize its ability to construct contrastive samples for preference optimization (Rafailov et al., 2023; Hong et al.,
2024) and enables the model to learn effectively from the errors of the weaker model.
In implementation, we employ Llama2-70b (Touvron et al., 2023) as the strong model, test three separate weak
models: Llama2-7b, Gemma-2b (Mesnard et al., 2024), and Mistral-7b (Jiang et al., 2023), and conduct experiments
1Similar to Burns et al. (2023), we define “strong model” in the context of LLMs, taking into account their characteristics—that
is, LLMs often contain the knowledge and capabilities needed to perform specific tasks, but these have not yet been fully
elicited (Zhou et al., 2024). Typically, it refers to stronger and larger pre-trained language models whose capabilities have not
been fully realized yet.
-----
on the commonly used math reasoning datasets GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021).
Experimental results reveal that:
1. Full weak fine-tuning, while effective in classification tasks, falls short for complex reasoning tasks.
2. Our proposed method significantly outperforms full weak fine-tuning method, achieving a 26.99-point improvement on GSM8K when supervised solely by the weak model (i.e., Gemma-2b) after the first stage of
training (M →Mplus), and further enhances performance by an additional 8.49 points through preference
optimization without knowing the gold answer (Mplus →Mpro).
3. Our proposed preference optimization phase enables the strong model to learn from errors made by the weak
supervisor, ultimately surpassing the strong model fine-tuned on gold-standard solutions (i.e., strong
ceiling) in challenging scenarios, such as level 4-5 MATH problems.
To more accurately approximate future scenarios, we conduct additionally experiments on OlympicArena
**(Huang et al., 2024), an extremely challenging dataset with no definitive ground truth answers. Llama3-8b-**
instruct (AI@Meta, 2024), despite its smaller size, has been aligned and proved to effectively supervise the larger
Llama3-70b, whose potential have not yet been fully realized. Moreover, our proposed two-stage training approach
outperforms full weak fine-tuning by 3.19 points.
**2** **Preliminaries**
**2.1** **Typical Learning Paradigms for LLMs**
We outline common learning paradigms in large
model training, primarily characterized by the need
for ground truth answers and supervision from
stronger models as shown in Tab. 1.
**G.T. Answer** **Stronger Model**
for ground truth answers and supervision from Generic-supervised ✔ **–**
stronger models as shown in Tab. 1. Distillation-based ✘ ✔
Self-improvement ✔ **–**
**Generic-Supervised Learning** When training LLMs, Semi-supervised ✔ **–**
Weak-to-strong ✘ ✘
it is ideal to have a sufficient amount of training
data with ground truth answers, which we refer to as
Table 1: Typical Learning Paradigms for LLMs. “✔” and “✘”
_generic-supervised learning paradigm (Ouyang et al.,_
indicate whether supervision is required, and “–” indicates it is
2022; Yuan et al., 2023). However, acquiring such
optional. “G.T.” represents Ground Truth.
data is often label-intensive and can sometimes be
impossible. As a result, various learning paradigms
have emerged to reduce the effects of data quality and quantity while still improving performance.
**Distillation-based Learning** In the current context, to enhance a strong model like Llama2-70b, improvements
can still be made by seeking help to a stronger model like GPT-4 (OpenAI, 2023), even without ground truth. Hence,
many existing works suggest that a stronger model acts as a teacher model to provide specific feedback to improve
the targeted model (Lee et al., 2023; Peng et al., 2023; An et al., 2023; Agarwal et al., 2023; Chen et al., 2023). This
paradigm can be viewed as distilling the stronger teacher model’s knowledge. Nonetheless, merely imitating the
teacher model is not a long-term solution; imitation models only slightly close the performance gap to the teacher
model on tasks not well-represented in the imitation data (Gudibande et al., 2023). Furthermore, distillation learning
primarily benefits models that are less capable than the teacher model.
**Self-Improvement Learning** Considering the high costs of annotating training data by humans or stronger
proprietary models, a line of works relies on the correct responses generated by the model itself to update it. For
example, Zelikman et al. (2022); Yuan et al. (2023); Singh et al. (2023); Hosseini et al. (2024) filter solutions
according to the correctness of final answers, while Lightman et al. (2023); Lin et al. (2024) employ reward models
trained on gold annotations to score self-generated content. It is evident that, whether using binary labels or
fine-grained feedback, this paradigm still requires ground truth to assess the usability of the model’s self-generated
responses. Without ground truth answers, self-improvement leads to minimal performance gains and may even
degrade performance (Huang et al., 2023; Tyen et al., 2023).
**Semi-Supervised Learning** Gaining insights from semi-supervised learning within the domain of traditional
machine learning, another type of LLM learning depends not on extensive labeling but instead on a small, highquality seed dataset. Tong et al. (2024) have demonstrated improvement by learning differences between selfgenerated responses and expert-annotated responses. We also include the trending research topic of easy-to-hard
_generalization (Hase et al., 2024; Sun et al., 2024) in this category, where models are trained to tackle complex_
tasks by learning from human annotations on easier tasks. This series of research inevitably require access to a
small yet high quality set of standard answers.
-----
**Weak-to-Strong Learning** In scenarios where models surpass human capabilities, the challenge of providing
comprehensive and precise supervision for complex tasks intensifies, particularly as no ground truth exists, nor a
**superior model for supervisory guidance. This absence underscores the critical importance of weak-to-strong**
_learning approaches. Such methods uniquely leverage weaker supervisory signals to recover latent knowledge_
from already powerful models. For example, fine-tuning GPT-4 with a GPT-2-level supervisor can recover close
to GPT-3.5-level performance on certain tasks (Burns et al., 2023). This strategy holds profound implications
for advancing human societal progress by equipping LLMs with the capabilities to address currently unsolvable
mathematical and physical challenges. Unlike other learning paradigms, weak-to-strong learning operates under
comparatively relaxed conditions, opening expansive opportunities for exploration and innovation.
**2.2** **Weak-to-Strong Reasoning Setup**
In this paper, we address reasoning tasks in the weakto-strong setting, as illustrated in Tab. 2. First, we
|Role|weak model strong model task question|
|---|---|
examine mathematical reasoning tasks, such as those Llama2-7b GSM8K
in GSM8k and MATH. These tasks require each step + SFT(Dgold,1) _∈_ MATH
|Analogue|Llama2-7b Llama2-70b Q ∈GSM8K + SFT(Dgold,1) ∈MATH|
|---|---|
of the reasoning process to demonstrate fundamen
Table 2: Weak-to-Strong Reasoning Setup.
tal mathematical problem-solving skills, including
problem comprehension and algebraic operations, and
build upon the previous steps. It imposes higher demands on the model’s learning and generalization capabilities.
Unlike classification tasks, where models can rely on superficial pattern extrapolation or recognition, reasoning
tasks offer minimal benefit from guessing. Then, we use a weak model (e.g., Llama2-7b) with a certain degree
of mathematical problem-solving ability,[2] denoted as m. This model acts analogously to human supervisors with
limited expertise in the era of superintelligence. Besides, we only have a set of questions Q = {qi} without ground
truth answers and the goal is to improve the reasoning capability of a strong model M (e.g., Llama2-70b). To
implement this, following Burns et al. (2023), we randomly divide the original training set into two equal parts,
_Dgold,1 and Dgold,2. The weak model is initially fine-tuned using Dgold,1 where the gold solutions are available,_
resulting in a weak model with some problem-solving capability, i.e. m. In contrast, the strong model can only
access the questions from Dgold,2, without reasoning chains or final answers, i.e., Q.
**3** **Methodology**
In this section, we propose a weak-to-strong training method designed to maximize the use of weak data and to
elicit the strong model’s innate talent. First, we identify potentially positive samples in the absence of ground truth
and external signals. During Stage I, we exclusively utilize this subset of data for supervised fine-tuning. Then once
the strong model has achieved a certain level of reasoning proficiency, we employ the full weak data, particularly
the potentially negative samples in Stage II via preference learning-based approaches like DPO (Rafailov et al.,
2023), encouraging the strong model to learn from mistakes made by the weaker model. The whole framework is
depicted in Fig. 3.
**3.1** **Stage I: Learn from “Positive” Samples**
Given a weak model m and a series of math problems Q without ground truth, m generates weak data Dweak =
_{qi, cweak,i, aweak,i}, where qi ∈Q, cweak,i represents a reasoning chain, and aweak,i represents the final answer. The_
correctness of aweak,i is unknown. The central challenge is: how can we maximize the use of m and Dweak to fully
**enhance and recover the mathematical reasoning capabilities of a stronger model M?**
**3.1.1** **Full Weak Fine-Tuning**
Our initial strategy is to fine-tune the stronger model M across the entirety of the weak dataset Dweak. While prior
research (Burns et al., 2023) has validated the effectiveness of this approach in text classification tasks, its efficacy
in reasoning tasks remains unexplored. We have therefore embarked on an investigation to determine whether
the phenomenon of weak-to-strong generalization can also enhance the reasoning capabilities of M in this less
examined domain.
**3.1.2** **Weak In-Context Learning**
Another straightforward approach is in-context learning (ICL, Dong et al. (2023b)), which requires only several
training samples as demonstrations in the prompt. Specifically, we randomly select four samples from Dweak as
demonstrations. Since we do not have access to the ground truth, these demonstrations cannot be provably correct.
2Otherwise, the weak model can hardly provide useful supervision.
-----
Zero-shot CoT Filter
SFT
Final Answer
Reasoning Consistency
Problems SFT
In-context Learning
Filter
Few-shot CoT SFT
Zero-shot CoT Filter
SFT
Final Answer
Reasoning Consistency
Problems SFT
Filter
Zero-shot CoT SFT
Reasoning
Problems
Sample
solutions Confidence
Preference Optimization
Figure 3: Overview of our method evolving from M _→Mplus_ _→Mpro_ **. Left: we utilize final answer consistency to**
selectively filter weak and icl data from diverse sources, which is used to fine-tune the strong model M and obtain Mplus with
enhanced mathematical reasoning capabilities. Right: we leverage the confidence of Mplus to identify contrastive samples for
performance optimization, resulting in a more robust strong model Mpro.
**3.1.3** **Weak-ICL Fine-Tuning**
Given that models can mimic weak errors through supervised fine-tuning (Charikar et al., 2024; Lang et al., 2024),
we propose refining Dweak before use, instead of using all data blindly. Additionally, we seek to harness the innate
abilities of the strong model activated via in-context learning. Building on these two ideas, we introduce weak-icl
_fine-tuning, employing both weak data Dweak and “icl data” Dicl = {qi, cicl,i, aicl,i}, where qi ∈Q, cicl,i and aicl,i_
are generated by M with few-shot demonstrations,[3] as higher-quality supervision signals.
Note that, for both Dweak and Dicl, we cannot determine whether a certain answer is correct or not. Nonetheless,
when two models, employing distinct data representations, converge on the same answer in an open-ended task,
it is indicative of a higher likelihood of accuracy. This phenomenon supports the reliability of the results when
consistency is observed across different methodologies. We thus compare Dweak and Dicl generated by the weak
model and strong model, respectively, and select _D[ˆ]weak and_ _D[ˆ]icl if aweak,i = aicl,i, for subsequent supervised_
fine-tuning. We call this approach final answer consistency. Considering the combination of the two sets of data, we
can obtain three versions of enhanced fine-tuned strong models:
- Mweak-ft: M fine-tuned on _D[ˆ]weak._
- Micl-ft: M fine-tuned on _D[ˆ]icl._
- Mhybrid-ft: M fine-tuned on the union of _D[ˆ]weak and_ _D[ˆ]icl._
**Iterative Training** Upon closed examination of Mweak-ft and Micl-ft, we see that they still satisfy the condition
of having different data representations, as they are trained on data from different sources—D[ˆ]weak is generated by
the weak model, whereas _D[ˆ]icl primarily originates from the strong model itself. Hence, we can perform iterative_
training to bootstrap performance. We denote the initial round of supervised fine-tuning data as _D[ˆ]weak[1]_ [and][ ˆ]Dicl[1] [,]
resulting in models Mweak-ft[1] [,][ M][1]icl-ft[, and][ M][1]hybrid-ft[. In the second iteration, we obtain zero-shot solutions from]
_Mweak-ft[1]_ [applied to][ Q][ to construct][ D]weak[2] [, and those from][ M][1]icl-ft [to construct][ D]icl[2] [. Here, the subscripts “weak” and]
“icl” indicate the initial data source. Then we apply final answer consistency to obtain _D[ˆ]weak[2]_ [and][ ˆ]Dicl[2] [. Following]
another round of supervised fine-tuning, we have:
- Mweak-ft[2] [:][ M][ fine-tuned on][ ˆ]Dweak[2] [.]
- Micl-ft[2] [:][ M][ fine-tuned on][ ˆ]Dicl[2] [.]
- Mhybrid-ft[2] [:][ M][ fine-tuned on the union of][ ˆ]Dweak[2] [and][ ˆ]Dicl[2] [.]
3Experiments in §4.3 show that despite ICL being affected by demonstration selection, our method can achieves further
improvements accordingly beyond ICL.
-----
**QUESTION (qi): John has five more roommates than twice as many as Bob. If Bob has 10 roommates, how many**
roommates does John have?
**WEAK RESPONSE ({cweak,i, aweak,i}): John has 10+5=15 roommates. The answer is 15.**
**SELF RESPONSE 1 (** _c[1]strong,i[, a][1]strong,i[} ∈]_ _[A][+]strong,i[):][ Bob has 10 roommates. Twice as many as Bob is 2*10 = 20]_
_{_
roommates. John has 5 more roommates than twice as many as Bob, so John has 20+5 = 25 roommates. The answer is 25.
**SELF RESPONSE 2 (** _c[2]strong,i[, a][2]strong,i[} ∈]_ _[A][+]strong,i[):][ Let x be the number of roommates Bob has. John has 5 more]_
_{_
roommates than twice as many as Bob, so John has 2x+5 roommates. Bob has 10 roommates, so x=10. John has 2*10+5 =
25 roommates. The answer is 25.
Table 3: A real case example. Given a math question, the incorrect “weak response” is generated by m, while the two correct
“self responses” are sampled from A[+]strong,i [self-generated by][ M][plus][. Benefiting from dual solutions in the training data during]
Stage I, Mplus is able to generate different reasoning paths that converge to the same final answer. Through Stage II, Mplus
learns to avoid m’s error of overlooking the key word “twice” in calculations.
Note that the iterative training step is optional; it may lead to performance degradation when data quality is too
low or the model overfits.
**3.2** **Stage II: Learn from “Negative” Samples**
We denote the final iteration of Mhybrid-ft from Stage I as Mplus, which has learned dual mathematical solutions and
holds potential for further enhancement. Next, we apply preference optimization techniques to strategically utilize
the potentially erroneous subset of the original weak dataset Dweak = {qi, cweak,i, aweak,i} generated by m, which
allows the strong model to identify and avoid similar errors in future reasoning processes. The key factor lies in how
to construct contrastive samples for learning.
Without access to ground truth, the current strong model with enhanced reasoning capabilities identifies the most
likely correct answers based on its confidence. Specifically, for each question qi, we sample n responses from
_∈Q_
_Mplus, and define the probability of the answer that appears most frequently among these responses as confidence._
When the confidence falls below a specified threshold τ, we consider the model’s judgment on this question
unreliable and therefore discard it. Conversely, if the confidence is no less than τ, we regard the model as capable of
solving the question and proceed to construct contrastive samples as follows.
- For a question qi where Mplus is confident, we denote the most confident answer as a[+]strong,i [and][ P] [(][a]strong[+] _,i[)][ ≥]_ _[τ]_ [.]
It can be considered as the “correct” answer according to Mplus. For instance, if we set τ = 0.6 and 8 out of 10
sampled responses have the same final answer “4.2”, we say that Mplus considers “4.2” to be the correct answer
to this question, i.e. a[+]strong,i [= 4][.][2][.]
- Then we divide the sampled n responses of Mplus to qi into two sets: A[+]strong,i [=][ {][c]strong[j] _,i[, a][j]strong,i[}][ where]_
_a[j]strong,i_ [=][ a]strong[+] _,i[;][ A][−]strong,i_ [=][ {][c]strong[k] _,i[, a][k]strong,i[}][ where][ a][k]strong,i_ _[̸][=][ a]strong[+]_ _,i[. In the above example,][ |][A][+]strong,i[|][ = 8]_
and _A[−]strong,i[|][ = 2][.]_
_|_
- If the weak model holds an answer that the enhanced model considers “correct”, that is, aweak,i = a[+]strong,i[, we]
treat the weak model’s response {cweak,i, aweak,i} as chosen response and randomly select a rejected response
from A[−]strong,i[. Otherwise, if][ a][weak][,i][ ̸][=][ a][+]strong,i[, we treat][ {][c][weak][,i][, a][weak][,i][}][ as rejected response and randomly select]
a chosen response from A[+]strong,i[. Examples are shown in Tab.][ 3][.]
Further training Mplus on these samples enables it to distinguish between correct and incorrect solutions, leading
to a stronger model Mpro.
**4** **Experiments**
**4.1** **Datasets**
GSM8K (Cobbe et al., 2021) and MATH (Hendrycks
et al., 2021) are two widely used datasets for mathematical reasoning, and MATH comprises more challenging competition problems. The data statistics we
use are presented in Tab. 4. Particularly, to ensure
a sufficient amount of training data for developing
preliminary mathematical skills in the weak model,
we augment the GSM8K training set with the data
constructed by Chern et al. (2023). Further details are
available in §A.1.
**# Dgold,1** **# Dgold,2** **# Test**
GSM8K 7,000 7,000 1,319
MATH 6,000 6,000 500
Table 4: Data Statistics. Dgold,1 and Dgold,2 are subsets of the
training set. The weak model uses Dgold,1 to cultivate initial
mathematical skills, while the strong model can only access
questions from Dgold,2 without ground truths.
-----
Llama2-7b GSM8K
Gemma-2b GSM8K
Mistral-7b GSM8K
80
90
80
70
60
90
80
70
60
50
40
30
40
30
70
65
50
40
40
30
15
60
Iter. 0 (Baseline) Iter. 1 (Ours) Iter. 2 (Ours)
Iter. 0 (Baseline) Iter. 1 (Ours) Iter. 2 (Ours)
Iter. 0 (Baseline) Iter. 1 (Ours) Iter. 2 (Ours)
94.16
86.28
81.05
71.95
63.76
61.71
59.06 62.62
59.44
57.47
53.68
42.38
33.81
94.16
85.14
80.14
71.95
59.21 60.12
56.48
56.03
53.22
45.03
42.91
29.04
25.17
94.16
88.70
71.95
68.08 68.39
66.26 66.64
66.72
65.20
61.56
61.33
59.51
Llama2-7b MATH
Iter. 0 (Baseline) Iter. 1 (Ours)
30
43.40
15
10
15
10
10
Iter. 0 (Baseline) Iter. 1 (Ours)
Iter. 0 (Baseline) Iter. 1 (Ours)
|43.40 Gemma-2b MATH|Col2|
|---|---|
|38.40 31.60 17.80||
|||
|||
|15.60 14.20 12.60 11.60||
|10.80 10.00||
|43.40 Mistral-7b MATH|Col2|
|---|---|
|39.60 38.80 17.80||
|||
|||
|14.80 14.80 14.40 13.60||
|13.60 11.80||
33.60
17.80
14.00
11.40 11.80
10.80
5.80 4.40
weak weak(pass@10) icl icl(pass@10) hybrid hybrid(pass@10) weak floor strong ceiling strong ceiling(pass@10)
Figure 4: Main results of Stage I. “Iter. 0” presents the performance of two baselines, where “weak” indicates full weak
fine-tuning, i.e., naively fine-tuning on the entire weak data, and “icl” refers to weak ICL without fine-tuning. Models connected
by a line mean that they share the same training data sources. Results below “strong ceiling” present test accuracy via greedy
decoding, while those above show pass@k scores (k = 10 and temperature = 1.0). For simplicity, we only present the pass@k
scores of Mhybrid-ft and checkpoints that surpass it using greedy decoding, and full results are provided in §A.4.2.
**4.2** **Experiment Settings**
We use Llama2-70b as the strong model and employ three weak models from different families: Llama2-7b,
Gemma-2b, and Mistral-7b. We apply full parameter fine-tuning to the weak models on Dgold,1, and consistently
adopt LoRA (Hu et al., 2022) to fine-tune the strong model. In Stage I, we perform two rounds of iterations on
GSM8K and one round on MATH according to the principles of iteration outlined in §3.1. In Stage II, we adopt
two preference learning-based approaches, DPO (Rafailov et al., 2023) and its variant ORPO (Hong et al., 2024).
Details are provided in §A.2.
We evaluate the accuracy on the test set. The performance of the weak model m is defined as the “weak floor”.
The performance of the strong model M, fine-tuned with data containing gold solutions from Dgold,2, is termed the
“strong ceiling”. It represents the upper limit of the capabilities that the strong model can achieve with Dgold,2.
**4.3** **Results of Stage I**
The main results of Stage I on both GSM8K and MATH datasets are depicted in Fig. 4. Notably, in the MATH
experiments, we randomly sample additional data that is not chosen based on the final answer consistency, due to the
small amount available. Please refer to §A.4.1 for details. According to Fig. 4, we have the following observations.
**Weak-ICL fine-tuning demonstrates a notable enhancement. Using our proposed method, the performance**
of the strong model, supervised only by the weak Gemma-2b with 25.17 accuracy on GSM8K (without any
gold answers), can be improved up to 60.12, surpassing naive full weak fine-tuning by 31.08, and Mplus (i.e.,
_Mhybrid-ft[2]_ [) outperforms it by 26.99. This verifies the effectiveness of data refining before supervised fine-tuning.]
Also, experimental results show that the mathematical reasoning capabilities of the strong model are increasingly
recovered as the weak model improves, a conclusion verified by Liu and Alahi (2024) on classification tasks.
In detail, the performance on GSM8K gradually improves for Gemma-2b, Llama-7b, and Mistral-7b (25.17 →
33.81 → 59.51). Hence, the maximum performance of the strong model, trained with data generated by these
models, also progressively enhances (60.12 → 63.76 → 68.39).
_Mhybrid-ft achieves the highest pass@k scores. As expected, Mhybrid-ft achieves the highest pass@k scores in_
the weak-to-strong setting, benefiting from its training data that incorporates two types of solutions—one from the
weak model, and another from the strong model. This diversity enhances the robustness of the model by reducing
the likelihood of overfitting. Additionally, the performance of Micl-ft generally surpasses that of Mweak-ft, which
can be attributed to variations in process-level accuracy and possibly the solution format. Detailed analyses are
conducted in §A.3.
-----
**Naive fine-tuning is inadequate for weak-to-strong reasoning. When using Gemma-2b as the weak model on**
the MATH dataset, full weak fine-tuning underperforms compared to the weak floor (10.0 v.s. 11.6). This indicates
that naive fine-tuning, though successfully applied to classification, chess, and reward modeling tasks (Burns et al.,
2023), falls short for intricate reasoning tasks, particularly those of substantial difficulty like questions in MATH. In
contrast, our weak-icl fine-tuning method effectively bridges the gap, offering an effective and scalable solution for
the weak-to-strong reasoning challenge.
**Effect of ICL Performance** Given that the efficacy
of weak-icl fine-tuning partially depends on the ef- 65 64.75
fectiveness of weak ICL, we further explore how en- 64.06
hancing ICL performance through careful selection of
demonstrations affects the performance of weak-icl
fine-tuning. Fig. 5 shows the test accuracy on GSM8K
using Gemma-2b as the weak model under a different
60
set of demonstrations. 59.21
The results indicate that the performance of weak
Test Accuracy (%)
ICL with this particular group of demonstrations increases from the original 56.48 to 64.06. We then regenerate Dicl with these demonstrations in the prompt 55
and fine-tune the strong model on _D[ˆ]icl, which is selec-_ Weak ICL Micl-ft[1]
tively curated through final answer consistency. This
further improves performance from 64.06 to 64.75,
confirming the utility of self-directed data curation. It and are under original demonstrations, and and are
is worth noting that although weak ICL holds the po- under carefully selected demonstrations.
tential for high performance, the selection of effective
demonstrations in a weak-to-strong framework is a non-trivial thing, and is beyond the scope of this paper.
**4.4** **Results of Stage II**
As discussed in §3.2, we employ the final iteration of
_Mhybrid-ft as Mplus for subsequent preference learn-_ **Weak Model**
ing. The experimental results in §4.3 validate this I II. DPO II. ORPO
checkpoint achieves higher pass@k and possesses **GSM8K**
significant potential for further refinement. Llama2-7b 62.62 66.19 (+3.57) 68.16 (+5.54)
As shown in Tab. 5, our method for construct- Gemma-2b 56.03 64.52 (+8.49) 63.91 (+7.88)
ing positive and negative samples effectively enhances the strong model’s math reasoning capabil- **MATH**
ities. On GSM8K, both DPO and ORPO consis- Llama2-7b 14.00 12.00 (-2.00) 15.00 (+1.00)
tently achieve significant improvements using our constructed datasets, notably resulting in an increase of
|64.75 64.06 59.21 56.48|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
|ults o er ori select nd is b|Weak n GS ginal ed de eyo|ICL M8K dem mon nd th|super onstra stratio e sco|M1 i vised tions ns. pe of|cl-ft by, and this||
|el|||Test|Accu|racy||
||I||II. DP|O||II. ORPO|
|6 5 6|2.62 6.03 8.39|66 64 70|.19 (+ .52 (+ .96 (+|3.57) 8.49) 2.57)|6 6 7|8.16 (+5.54) 3.91 (+7.88) 2.18 (+3.79)|
|1 1 1|4.00 4.20 4.80|1 1 1|2.00 (-2 1.60 (-2 3.40 (-1|.00) .60) .40)|1 1 1|5.00 (+1.00) 6.00 (+1.80) 7.00 (+2.20)|
8.49 points when supervised by Gemma-2b. Despite
Table 5: Main results of Stage II.
the inherently challenging nature of MATH problem,
which compromises the strong model’s judgment and introduces inaccuracies in the training data, our method still
achieves improvements on MATH through ORPO by at least 1 point.[4]
**Data Construction Recipe** When constructing preference data, we always use weak responses generated by the
weak model as one of the chosen/rejected responses, instead of relying exclusively on self-generated data. We also
test the self-generated setting on GSM8K using Llama2-7b as the weak model, where both chosen and rejected
responses are generated by the strong model itself. The DPO test accuracy in this setting is 62.40 (-0.22), indicating
a slight performance degradation. Without ground truth, the constructed positive and negative samples actually
correspond to the more frequently and less frequently occurring answers, respectively, and are related to the answers
the model tends to choose. Since preference optimization essentially performs ranking, the potential benefit of this
self-generated setting is minimal. Therefore, incorporating weak data signals in the preference data construction
process proves to be a better approach.
**4.5** **Analysis**
For further analysis, we examine the accuracy across different difficulty levels in the MATH test set (See §A.1.2 for
data statistics).
4Pang et al. (2024); Xu et al. (2024); Yuan et al. (2024) demonstrate that DPO can cause performance degradation on MATH
due to the lack of regularization in its loss.
-----
Llama2-7b Gemma-2b Mistral-7b
50 50 50 strong ceiling
weak floor
40 40 40
Mplus
30 30 30 Mpro
20 20 20
Test Accuracy (%)
10 10 10
0 0 0
L1 L2 L3 L4 L5 L1 L2 L3 L4 L5 L1 L2 L3 L4 L5
Difficulty Level Difficulty Level Difficulty Level
Figure 6: Test accuracy across varying difficulty levels on the MATH test set. We use ORPO to obtain Mpro.
As shown in Fig. 6, the strong model exhibits better generalization on easier problems. Specifically, even though
Llama2-7b achieves only 6.98 points accuracy on level 1 problems, Llama2-70b can achieve an accuracy exceeding
30 points after training using this weak supervision. For more challenging problems (levels 4-5), Mpro, enhanced
with ORPO, even surpasses the strong ceiling obtained by supervised fine-tuning solely on gold solutions. This
phenomenon serves to validate the effectiveness of learning from incorrect data.
**4.6** **Experiments Closer to Future Scenarios**
In preliminary tests with Llama3-70b (AI@Meta, 2024), we ob
**Test Accuracy**
serve that on GSM8K and MATH, Llama3-70b can largely unlock
its potential through in-context learning, with marginal or even ad- Weak Floor 11.82
verse impacts from parameter updates due to training instabilities. Full Weak FT 12.46
Weak ICL 8.63
Consequently, we focus on a more challenging dataset developed
after the release of Llama3-70b, OlympicArena (Huang et al., _Mweak-ft[1]_ 12.78
2024), to simulate a more realistic future scenario. _Micl-ft[1]_ 9.58
We only consider English questions in OlympicArena, exclud- _Mhybrid-ft[1]_ 11.18
ing the CODE (Code Generation) and OT (Others) problem types _Mweak-ft[2]_ 13.10
icl-ft 11.50
that require case-based or expert evaluation. This results in 6,020 _M[2]_
training data without solutions and final answers, and 313 test _Mhybrid-ft[2]_ [(][M][plus][)] 11.82
pro **15.65**
data with final answers to assess the performance of different _M_
methods. We use Llama3-8b-instruct (without initial fine-tuning
Table 6: Results on OlympicArena using Llama3
on a subset of training data) as the weak model and Llama3-70b
_family. The best result is in bold, and the best result_
as the strong model to be improved. The hyperparameters are of supervised fine-tuning in underlined.
consistent with those used for GSM8K. This configuration more
closely resembles future real-world weak-to-strong scenarios.
Experimental results are displayed in Tab. 6. “Weak Floor” represents the zero-shot performance of Llama38b-instruct, “Full Weak FT” denotes the performance of Llama3-70b after supervised fine-tuning on the full set
(i.e, 6,020) of weak solutions generated by Llama3-8b-instruct on the training set, and “Weak ICL” indicates the
performance of Llama3-70b under 4-shot weak demonstrations generated by Llama3-8b-instruct. Despite having
more parameters, Llama3-70b under in-context learning still performs lower than the zero-shot performance of
Llama3-8b-instruct due to insufficient mining capabilities.
_Mweak-ft[1]_ [, obtained by our proposed weak-icl fine-tuning method, achieves higher performance than Full Weak]
FT with fewer training data (i.e., 746), outperforming it by 0.32 points. After the second stage of preference
optimization, which further exploits the weak model and training questions without answers, the strong model’s
performance is improved by an additional 3.19 points over Full Weak FT. This demonstrates the robustness and
generalizability of our method in scenarios closer to future conditions.
**5** **Related Work**
**5.1** **LLM Training**
LLMs can enhance their ability to solve tasks and better align with human instructions through a supervised
fine-tuning (SFT) phase (Zhang et al., 2023; Dong et al., 2023a; Lv et al., 2023b,a). This phase heavily relies on the
quality of training data, as previous studies (Zhou et al., 2023a; Wang et al., 2023b) demonstrate that higher data
quality translates to substantial gains in model performance. In this paper, we investigate the potential of learning
from weak supervisions.
-----
To further align LLMs with human values and enable learning from both positive and negative feedback, additional
training is required, such as reinforcement learning from human feedback (RLHF, Ouyang et al. (2022); Bai et al.
(2022)) and direct preference optimization (DPO, Rafailov et al. (2023)). In particular, DPO reparameterizes reward
functions in RLHF and has been widely used due to its simplicity. Several variants of DPO have then emerged
to further enhance its stability and performance, such as ORPO (Hong et al., 2024) and SimPO (Meng et al.,
2024), etc. This paper explores the capabilities of DPO and ORPO using our constructed contrastive samples in a
weak-to-strong setting.
**5.2** **Mathematical Reasoning**
The exploration of mathematical reasoning within LLMs has been a focal point for evaluating their cognitive
capabilities akin to human reasoning (Qiao et al., 2023; Lu et al., 2023). Researchers have developed various
methods to enhance the mathematical reasoning capabilities of LLMs after pre-training, which can be broadly
classified into two categories: (1) Prompting: Some works (Kojima et al., 2022; Wei et al., 2022; Zhou et al., 2023b;
Liu et al., 2023) aims to elicit the intrinsic reasoning abilities of LLMs by specific prompting engineering, without
updating the model parameters; (2) Fine-tuning: Another line of studies focuses on generating a more extensive
and higher-quality collection of question-answer pairs (Yu et al., 2023; Wang et al., 2023c,a). Through supervised
fine-tuning and preference optimization (Luo et al., 2023; Azerbayev et al., 2023; Mitra et al., 2024; Xu et al., 2024),
the models can achieve significant improvements in their mathematical problem-solving capabilities.
**6** **Conclusion**
In this paper, we explore the efficacy of weak-to-strong framework in complex reasoning tasks. We introduce a new
method that elicits strong capabilities using weak supervisions, without relying on annotations from humans or
more advanced models. This method focuses on the strong model’s ability to autonomously refine its training data,
even if it has not learned the task before. By iteratively expanding its learning scope, the strong model continuously
broadens its reasoning skills. This self-directed data curation is crucial for scaling up the enhancement of AI
reasoning capabilities, making the model more independent and effective in its developmental trajectory. Through
this work, we seek to illuminate new pathways for AI development, emphasizing the critical role of innovative
model supervision in advancing AGI and beyond.
**Limitations**
In our experiments, we use Llama2-70b and Llama3-70b as a proxy for hypothetical superintelligent models of the
future. We acknowledge that there might be performance discrepancies compared to a genuine future advanced
model. Nonetheless, our efforts lay the groundwork for investigating methodologies in weak-to-strong reasoning.
Additionally, this paper does not explore supervision at the process level, such as the model’s ability to learn from
partially correct data (Ni et al., 2023; Lightman et al., 2023). In the weak-to-strong scenario, the presence of
non-negligible errors and noise at the process level yields only limited performance improvements in our early
experiments, thereby posing challenges for future research.
**Acknowledgements**
We sincerely thank Xuefeng Li, Haoyang Zou, and Ting Wu for their valuable insights during discussions, which
greatly enhance the quality of this work.
**References**
[1] Rishabh Agarwal, Nino Vieillard, Piotr Stanczyk, Sabela Ramos, Matthieu Geist, and Olivier Bachem. 2023.
[GKD: generalized knowledge distillation for auto-regressive sequence models. CoRR, abs/2306.13649.](https://doi.org/10.48550/ARXIV.2306.13649)
[[2] AI@Meta. 2024. Llama 3 model card.](https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md)
[[3] Sam Altman, Greg Brockman, and Ilya Sutskever. 2023. Governance of superintelligence. https://openai.](https://openai.com/index/governance-of-superintelligence/)
[com/index/governance-of-superintelligence/.](https://openai.com/index/governance-of-superintelligence/)
[[4] Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, and Weizhu Chen. 2023. Learning from](https://doi.org/10.48550/ARXIV.2310.20689)
[mistakes makes LLM better reasoner. CoRR, abs/2310.20689.](https://doi.org/10.48550/ARXIV.2310.20689)
[5] Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, Marco Dos Santos, Stephen McAleer, Albert Q. Jiang, Jia
[Deng, Stella Biderman, and Sean Welleck. 2023. Llemma: An open language model for mathematics. CoRR,](https://doi.org/10.48550/ARXIV.2310.10631)
abs/2310.10631.
10
-----
[6] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen,
Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny
Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller,
Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosiute, Liane Lovitt, Michael Sellitto, Nelson Elhage,
Nicholas Schiefer, Noem´ı Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston,
Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom
Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph,
[Sam McCandlish, Tom Brown, and Jared Kaplan. 2022. Constitutional AI: harmlessness from AI feedback.](https://doi.org/10.48550/ARXIV.2212.08073)
_CoRR, abs/2212.08073._
[7] Samuel R. Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamile Lukosiute,
Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher
Olah, Daniela Amodei, Dario Amodei, Dawn Drain, Dustin Li, Eli Tran-Johnson, Jackson Kernion, Jamie Kerr,
Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Liane Lovitt, Nelson Elhage, Nicholas Schiefer,
Nicholas Joseph, Noem´ı Mercado, Nova DasSarma, Robin Larson, Sam McCandlish, Sandipan Kundu, Scott
Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Timothy Telleen-Lawton, Tom Brown, Tom Henighan,
[Tristan Hume, Yuntao Bai, Zac Hatfield-Dodds, Ben Mann, and Jared Kaplan. 2022. Measuring progress on](https://doi.org/10.48550/ARXIV.2211.03540)
[scalable oversight for large language models. CoRR, abs/2211.03540.](https://doi.org/10.48550/ARXIV.2211.03540)
[8] Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yin[ing Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, Ilya Sutskever, and Jeff Wu. 2023. Weak-to-strong](https://doi.org/10.48550/ARXIV.2312.09390)
[generalization: Eliciting strong capabilities with weak supervision. CoRR, abs/2312.09390.](https://doi.org/10.48550/ARXIV.2312.09390)
[9] Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang
[Wang, Yidong Wang, Wei Ye, Yue Zhang, Yi Chang, Philip S. Yu, Qiang Yang, and Xing Xie. 2023. A survey on](https://doi.org/10.48550/ARXIV.2307.03109)
[evaluation of large language models. CoRR, abs/2307.03109.](https://doi.org/10.48550/ARXIV.2307.03109)
[[10] Moses Charikar, Chirag Pabbaraju, and Kirankumar Shiragur. 2024. Quantifying the gain in weak-to-strong](http://arxiv.org/abs/2405.15116)
[generalization.](http://arxiv.org/abs/2405.15116)
[11] Kai Chen, Chunwei Wang, Kuo Yang, Jianhua Han, Lanqing Hong, Fei Mi, Hang Xu, Zhengying Liu, Wenyong
[Huang, Zhenguo Li, Dit-Yan Yeung, Lifeng Shang, Xin Jiang, and Qun Liu. 2023. Gaining wisdom from](https://doi.org/10.48550/ARXIV.2310.10477)
[setbacks: Aligning large language models via mistake analysis. CoRR, abs/2310.10477.](https://doi.org/10.48550/ARXIV.2310.10477)
[12] Ethan Chern, Haoyang Zou, Xuefeng Li, Jiewen Hu, Kehua Feng, Junlong Li, and Pengfei Liu. 2023. Generative
[ai for math: Abel. https://github.com/GAIR-NLP/abel.](https://github.com/GAIR-NLP/abel)
[[13] Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep](https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html)
[reinforcement learning from human preferences. In Advances in Neural Information Processing Systems 30:](https://proceedings.neurips.cc/paper/2017/hash/d5e2c0adad503c91f91df240d0cd4e49-Abstract.html)
_Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA,_
pages 4299–4307.
[14] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert,
[Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. 2021. Training verifiers](http://arxiv.org/abs/2110.14168)
[to solve math word problems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
[15] Guanting Dong, Hongyi Yuan, Keming Lu, Chengpeng Li, Mingfeng Xue, Dayiheng Liu, Wei Wang, Zheng
[Yuan, Chang Zhou, and Jingren Zhou. 2023a. How abilities in large language models are affected by supervised](https://doi.org/10.48550/ARXIV.2310.05492)
[fine-tuning data composition. CoRR, abs/2310.05492.](https://doi.org/10.48550/ARXIV.2310.05492)
[16] Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, Lei Li, and
[Zhifang Sui. 2023b. A survey for in-context learning. CoRR, abs/2301.00234.](https://doi.org/10.48550/ARXIV.2301.00234)
[17] Run-Ze Fan, Xuefeng Li, Haoyang Zou, Junlong Li, Shwai He, Ethan Chern, Jiewen Hu, and Pengfei Liu. 2024.
[Reformatted alignment. CoRR, abs/2402.12219.](https://doi.org/10.48550/ARXIV.2402.12219)
[18] Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and
[Dawn Song. 2023. The false promise of imitating proprietary llms. CoRR, abs/2305.15717.](https://doi.org/10.48550/ARXIV.2305.15717)
[[19] Peter Hase, Mohit Bansal, Peter Clark, and Sarah Wiegreffe. 2024. The unreasonable effectiveness of easy](https://doi.org/10.48550/ARXIV.2401.06751)
[training data for hard tasks. CoRR, abs/2401.06751.](https://doi.org/10.48550/ARXIV.2401.06751)
[20] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob
[Steinhardt. 2021. Measuring mathematical problem solving with the MATH dataset. In Proceedings of the](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
_Neural Information Processing Systems Track on Datasets and Benchmarks 1, NeurIPS Datasets and Benchmarks_
_2021, December 2021, virtual._
[[21] Jiwoo Hong, Noah Lee, and James Thorne. 2024. ORPO: monolithic preference optimization without reference](https://doi.org/10.48550/ARXIV.2403.07691)
[model. CoRR, abs/2403.07691.](https://doi.org/10.48550/ARXIV.2403.07691)
11
-----
[22] Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron C. Courville, Alessandro Sordoni, and Rishabh Agarwal.
[2024. V-star: Training verifiers for self-taught reasoners. CoRR, abs/2402.06457.](https://doi.org/10.48550/ARXIV.2402.06457)
[23] Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu
[Chen. 2022. Lora: Low-rank adaptation of large language models. In The Tenth International Conference on](https://openreview.net/forum?id=nZeVKeeFYf9)
_Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net._
[[24] Jie Huang and Kevin Chen-Chuan Chang. 2023. Towards reasoning in large language models: A survey. In](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.67)
_Findings of the Association for Computational Linguistics: ACL 2023, Toronto, Canada, July 9-14, 2023, pages_
1049–1065. Association for Computational Linguistics.
[25] Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny
[Zhou. 2023. Large language models cannot self-correct reasoning yet. CoRR, abs/2310.01798.](https://doi.org/10.48550/ARXIV.2310.01798)
[26] Zhen Huang, Zengzhi Wang, Shijie Xia, Xuefeng Li, Haoyang Zou, Ruijie Xu, Run-Ze Fan, Lyumanshan Ye,
Ethan Chern, Yixin Ye, Yikai Zhang, Yuqing Yang, Ting Wu, Binjie Wang, Shichao Sun, Yang Xiao, Yiyuan
Li, Fan Zhou, Steffi Chern, Yiwei Qin, Yan Ma, Jiadi Su, Yixiu Liu, Yuxiang Zheng, Shaoting Zhang, Dahua
[Lin, Yu Qiao, and Pengfei Liu. 2024. Olympicarena: Benchmarking multi-discipline cognitive reasoning for](http://arxiv.org/abs/2406.12753)
[superintelligent ai.](http://arxiv.org/abs/2406.12753)
[27] Albert Q. Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego
de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, Lelio Renard Lavaud,´
Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothee Lacroix, and´
[William El Sayed. 2023. Mistral 7b. CoRR, abs/2310.06825.](https://doi.org/10.48550/ARXIV.2310.06825)
[[28] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
[models are zero-shot reasoners. In Advances in Neural Information Processing Systems 35: Annual Conference on](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
_Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December_
_9, 2022._
[[29] Hunter Lang, David Sontag, and Aravindan Vijayaraghavan. 2024. Theoretical analysis of weak-to-strong](http://arxiv.org/abs/2405.16043)
[generalization.](http://arxiv.org/abs/2405.16043)
[30] Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune,
[and Abhinav Rastogi. 2023. RLAIF: scaling reinforcement learning from human feedback with AI feedback.](https://doi.org/10.48550/ARXIV.2309.00267)
_CoRR, abs/2309.00267._
[31] Hunter Lightman, Vineet Kosaraju, Yura Burda, Harrison Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
[Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let’s verify step by step. CoRR, abs/2305.20050.](https://doi.org/10.48550/ARXIV.2305.20050)
[32] Zhenghao Lin, Zhibin Gou, Yeyun Gong, Xiao Liu, Yelong Shen, Ruochen Xu, Chen Lin, Yujiu Yang, Jian
[Jiao, Nan Duan, and Weizhu Chen. 2024. Rho-1: Not all tokens are what you need. CoRR, abs/2404.07965.](https://doi.org/10.48550/ARXIV.2404.07965)
[33] Tengxiao Liu, Qipeng Guo, Yuqing Yang, Xiangkun Hu, Yue Zhang, Xipeng Qiu, and Zheng Zhang. 2023.
[Plan, verify and switch: Integrated reasoning with diverse x-of-thoughts. In Proceedings of the 2023 Conference](https://doi.org/10.18653/V1/2023.EMNLP-MAIN.169)
_on Empirical Methods in Natural Language Processing, EMNLP 2023, Singapore, December 6-10, 2023, pages_
2807–2822. Association for Computational Linguistics.
[[34] Yuejiang Liu and Alexandre Alahi. 2024. Co-supervised learning: Improving weak-to-strong generalization](https://doi.org/10.48550/ARXIV.2402.15505)
[with hierarchical mixture of experts. CoRR, abs/2402.15505.](https://doi.org/10.48550/ARXIV.2402.15505)
[[35] Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and Kai-Wei Chang. 2023. A survey of deep learning for](https://doi.org/10.18653/V1/2023.ACL-LONG.817)
[mathematical reasoning. In Proceedings of the 61st Annual Meeting of the Association for Computational](https://doi.org/10.18653/V1/2023.ACL-LONG.817)
_Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 14605–14631._
Association for Computational Linguistics.
[36] Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin,
[Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for large language](https://doi.org/10.48550/ARXIV.2308.09583)
[models via reinforced evol-instruct. CoRR, abs/2308.09583.](https://doi.org/10.48550/ARXIV.2308.09583)
[[37] Kai Lv, Hang Yan, Qipeng Guo, Haijun Lv, and Xipeng Qiu. 2023a. Adalomo: Low-memory optimization with](https://doi.org/10.48550/ARXIV.2310.10195)
[adaptive learning rate. CoRR, abs/2310.10195.](https://doi.org/10.48550/ARXIV.2310.10195)
[[38] Kai Lv, Yuqing Yang, Tengxiao Liu, Qinghui Gao, Qipeng Guo, and Xipeng Qiu. 2023b. Full parameter](https://doi.org/10.48550/ARXIV.2306.09782)
[fine-tuning for large language models with limited resources. CoRR, abs/2306.09782.](https://doi.org/10.48550/ARXIV.2306.09782)
[[39] Yu Meng, Mengzhou Xia, and Danqi Chen. 2024. Simpo: Simple preference optimization with a reference-free](http://arxiv.org/abs/2405.14734)
[reward.](http://arxiv.org/abs/2405.14734)
12
-----
[40] Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane
Riviere, Mihir Sanjay Kale, Juliette Love, Pouya Tafti, L` eonard Hussenot, Aakanksha Chowdhery, Adam Roberts,´
Aditya Barua, Alex Botev, Alex Castro-Ros, Ambrose Slone, Amelie H´ eliou, Andrea Tacchetti, Anna Bulanova,´
Antonia Paterson, Beth Tsai, Bobak Shahriari, Charline Le Lan, Christopher A. Choquette-Choo, Clement Crepy,´
Daniel Cer, Daphne Ippolito, David Reid, Elena Buchatskaya, Eric Ni, Eric Noland, Geng Yan, George Tucker,
George-Cristian Muraru, Grigory Rozhdestvenskiy, Henryk Michalewski, Ian Tenney, Ivan Grishchenko, Jacob
Austin, James Keeling, Jane Labanowski, Jean-Baptiste Lespiau, Jeff Stanway, Jenny Brennan, Jeremy Chen,
[Johan Ferret, Justin Chiu, and et al. 2024. Gemma: Open models based on gemini research and technology.](https://doi.org/10.48550/ARXIV.2403.08295)
_CoRR, abs/2403.08295._
[[41] Arindam Mitra, Hamed Khanpour, Corby Rosset, and Ahmed Awadallah. 2024. Orca-math: Unlocking the](https://doi.org/10.48550/ARXIV.2402.14830)
[potential of slms in grade school math. CoRR, abs/2402.14830.](https://doi.org/10.48550/ARXIV.2402.14830)
[42] Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Alex Polozov, Christopher Meek, Dragomir Radev, and
[Jianfeng Gao. 2023. Learning math reasoning from self-sampled correct and partially-correct solutions. In The](https://openreview.net/pdf?id=4D4TSJE6-K)
_Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023._
OpenReview.net.
[[43] OpenAI. 2023. GPT-4 technical report. CoRR, abs/2303.08774.](https://doi.org/10.48550/ARXIV.2303.08774)
[44] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie
[Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)
[models to follow instructions with human feedback. In Advances in Neural Information Processing Systems 35:](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)
_Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA,_
_November 28 - December 9, 2022._
[45] Richard Yuanzhe Pang, Weizhe Yuan, Kyunghyun Cho, He He, Sainbayar Sukhbaatar, and Jason Weston. 2024.
[Iterative reasoning preference optimization. CoRR, abs/2404.19733.](https://doi.org/10.48550/ARXIV.2404.19733)
[[46] Arjun Panickssery, Samuel R. Bowman, and Shi Feng. 2024. LLM evaluators recognize and favor their own](https://doi.org/10.48550/ARXIV.2404.13076)
[generations. CoRR, abs/2404.13076.](https://doi.org/10.48550/ARXIV.2404.13076)
[[47] Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with](https://doi.org/10.48550/ARXIV.2304.03277)
[GPT-4. CoRR, abs/2304.03277.](https://doi.org/10.48550/ARXIV.2304.03277)
[[48] Gokul Puthumanaillam, Manav Vora, Pranay Thangeda, and Melkior Ornik. 2024. A moral imperative: The](https://doi.org/10.48550/ARXIV.2403.14683)
[need for continual superalignment of large language models. CoRR, abs/2403.14683.](https://doi.org/10.48550/ARXIV.2403.14683)
[49] Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, and
[Huajun Chen. 2023. Reasoning with language model prompting: A survey. In Proceedings of the 61st Annual](https://doi.org/10.18653/V1/2023.ACL-LONG.294)
_Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada,_
_July 9-14, 2023, pages 5368–5393. Association for Computational Linguistics._
[50] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon, and Chelsea Finn.
[2023. Direct preference optimization: Your language model is secretly a reward model. In Advances in](http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html)
_Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023,_
_NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023._
[[51] Xuan Ren, Biao Wu, and Lingqiao Liu. 2024. I learn better if you speak my language: Enhancing large language](https://doi.org/10.48550/ARXIV.2402.11192)
[model fine-tuning with style-aligned response adjustments. CoRR, abs/2402.11192.](https://doi.org/10.48550/ARXIV.2402.11192)
[[52] Christian P. Robert. 2017. Superintelligence: Paths, dangers, strategies. CHANCE, 30:42 – 43.](https://api.semanticscholar.org/CorpusID:63827220)
[53] Jitao Sang, Yuhang Wang, Jing Zhang, Yanxu Zhu, Chao Kong, Junhong Ye, Shuyu Wei, and Jinlin Xiao. 2024.
[Improving weak-to-strong generalization with scalable oversight and ensemble learning. CoRR, abs/2402.00667.](https://doi.org/10.48550/ARXIV.2402.00667)
[54] Avi Singh, John D. Co-Reyes, Rishabh Agarwal, Ankesh Anand, Piyush Patil, Xavier Garcia, Peter J. Liu, James
Harrison, Jaehoon Lee, Kelvin Xu, Aaron Parisi, Abhishek Kumar, Alex Alemi, Alex Rizkowsky, Azade Nova,
Ben Adlam, Bernd Bohnet, Gamaleldin F. Elsayed, Hanie Sedghi, Igor Mordatch, Isabelle Simpson, Izzeddin
Gur, Jasper Snoek, Jeffrey Pennington, Jiri Hron, Kathleen Kenealy, Kevin Swersky, Kshiteej Mahajan, Laura
Culp, Lechao Xiao, Maxwell L. Bileschi, Noah Constant, Roman Novak, Rosanne Liu, Tris Warkentin, Yundi
[Qian, Yamini Bansal, Ethan Dyer, Behnam Neyshabur, Jascha Sohl-Dickstein, and Noah Fiedel. 2023. Beyond](https://doi.org/10.48550/ARXIV.2312.06585)
[human data: Scaling self-training for problem-solving with language models. CoRR, abs/2312.06585.](https://doi.org/10.48550/ARXIV.2312.06585)
[55] Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang Gan. 2024.
[Easy-to-hard generalization: Scalable alignment beyond human supervision. CoRR, abs/2403.09472.](https://doi.org/10.48550/ARXIV.2403.09472)
13
-----
[56] Yongqi Tong, Sizhe Wang, Dawei Li, Yifan Wang, Simeng Han, Zi Lin, Chengsong Huang, Jiaxin Huang, and
[Jingbo Shang. 2024. Optimizing language model’s reasoning abilities with weak supervision.](http://arxiv.org/abs/2405.04086)
[57] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov,
Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton-Ferrer, Moya
Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao,
Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas,
Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux,
Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar
Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan
Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor,
Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie
Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas Scialom. 2023.´
[Llama 2: Open foundation and fine-tuned chat models. CoRR, abs/2307.09288.](https://doi.org/10.48550/ARXIV.2307.09288)
[[58] Gladys Tyen, Hassan Mansoor, Peter Chen, Tony Mak, and Victor Carbune. 2023. Llms cannot find reasoning](https://doi.org/10.48550/ARXIV.2311.08516)
[errors, but can correct them! CoRR, abs/2311.08516.](https://doi.org/10.48550/ARXIV.2311.08516)
[59] Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang Sui. 2023a.
[Math-shepherd: Verify and reinforce llms step-by-step without human annotations. CoRR, abs/2312.08935.](https://doi.org/10.48550/ARXIV.2312.08935)
[60] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh
[Hajishirzi. 2023b. Self-instruct: Aligning language models with self-generated instructions. In Proceedings of](https://doi.org/10.18653/V1/2023.ACL-LONG.754)
_the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023,_
_Toronto, Canada, July 9-14, 2023, pages 13484–13508. Association for Computational Linguistics._
[[61] Zengzhi Wang, Rui Xia, and Pengfei Liu. 2023c. Generative AI for math: Part I - mathpile: A billion-token-scale](https://doi.org/10.48550/ARXIV.2312.17120)
[pretraining corpus for math. CoRR, abs/2312.17120.](https://doi.org/10.48550/ARXIV.2312.17120)
[62] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed H. Chi, Quoc V. Le,
[and Denny Zhou. 2022. Chain-of-thought prompting elicits reasoning in large language models. In Advances in](http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html)
_Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022,_
_NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022._
[[63] Ting Wu, Xuefeng Li, and Pengfei Liu. 2024. Progress or regress? self-improvement reversal in post-training.](http://arxiv.org/abs/2407.05013)
[[64] Shijie Xia, Xuefeng Li, Yixin Liu, Tongshuang Wu, and Pengfei Liu. 2024. Evaluating mathematical reasoning](https://doi.org/10.48550/ARXIV.2404.05692)
[beyond accuracy. CoRR, abs/2404.05692.](https://doi.org/10.48550/ARXIV.2404.05692)
[65] Yifan Xu, Xiao Liu, Xinghan Liu, Zhenyu Hou, Yueyan Li, Xiaohan Zhang, Zihan Wang, Aohan Zeng,
[Zhengxiao Du, Wenyi Zhao, Jie Tang, and Yuxiao Dong. 2024. Chatglm-math: Improving math problem-solving](https://doi.org/10.48550/ARXIV.2404.02893)
[in large language models with a self-critique pipeline. CoRR, abs/2404.02893.](https://doi.org/10.48550/ARXIV.2404.02893)
[[66] Fangxu Yu, Lai Jiang, Haoqiang Kang, Shibo Hao, and Lianhui Qin. 2024. Flow of reasoning: Efficient training](http://arxiv.org/abs/2406.05673)
[of llm policy with divergent thinking.](http://arxiv.org/abs/2406.05673)
[67] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo Li,
[Adrian Weller, and Weiyang Liu. 2023. Metamath: Bootstrap your own mathematical questions for large language](https://doi.org/10.48550/ARXIV.2309.12284)
[models. CoRR, abs/2309.12284.](https://doi.org/10.48550/ARXIV.2309.12284)
[68] Lifan Yuan, Ganqu Cui, Hanbin Wang, Ning Ding, Xingyao Wang, Jia Deng, Boji Shan, Huimin Chen, Ruobing
[Xie, Yankai Lin, Zhenghao Liu, Bowen Zhou, Hao Peng, Zhiyuan Liu, and Maosong Sun. 2024. Advancing](https://doi.org/10.48550/ARXIV.2404.02078)
[LLM reasoning generalists with preference trees. CoRR, abs/2404.02078.](https://doi.org/10.48550/ARXIV.2404.02078)
[[69] Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Chuanqi Tan, and Chang Zhou. 2023. Scaling](https://doi.org/10.48550/ARXIV.2308.01825)
[relationship on learning mathematical reasoning with large language models. CoRR, abs/2308.01825.](https://doi.org/10.48550/ARXIV.2308.01825)
[[70] Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D. Goodman. 2022. Star: Bootstrapping reasoning with](http://papers.nips.cc/paper_files/paper/2022/hash/639a9a172c044fbb64175b5fad42e9a5-Abstract-Conference.html)
[reasoning. In Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information](http://papers.nips.cc/paper_files/paper/2022/hash/639a9a172c044fbb64175b5fad42e9a5-Abstract-Conference.html)
_Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - December 9, 2022._
[[71] Zhongshen Zeng, Pengguang Chen, Haiyun Jiang, and Jiaya Jia. 2023. Challenge llms to reason about reasoning:](https://doi.org/10.48550/ARXIV.2312.17080)
[A benchmark to unveil cognitive depth in llms. CoRR, abs/2312.17080.](https://doi.org/10.48550/ARXIV.2312.17080)
[72] Shengyu Zhang, Linfeng Dong, Xiaoya Li, Sen Zhang, Xiaofei Sun, Shuhe Wang, Jiwei Li, Runyi Hu, Tianwei
[Zhang, Fei Wu, and Guoyin Wang. 2023. Instruction tuning for large language models: A survey. CoRR,](https://doi.org/10.48550/ARXIV.2308.10792)
abs/2308.10792.
14
-----
[73] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping
[Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy. 2023a. LIMA: less is](http://papers.nips.cc/paper_files/paper/2023/hash/ac662d74829e4407ce1d126477f4a03a-Abstract-Conference.html)
[more for alignment. In Advances in Neural Information Processing Systems 36: Annual Conference on Neural](http://papers.nips.cc/paper_files/paper/2023/hash/ac662d74829e4407ce1d126477f4a03a-Abstract-Conference.html)
_Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023._
[74] Chunting Zhou, Pengfei Liu, Puxin Xu, Srinivasan Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping
Yu, Lili Yu, et al. 2024. Lima: Less is more for alignment. Advances in Neural Information Processing Systems,
36.
[75] Denny Zhou, Nathanael Scharli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire¨
[Cui, Olivier Bousquet, Quoc V. Le, and Ed H. Chi. 2023b. Least-to-most prompting enables complex reasoning](https://openreview.net/pdf?id=WZH7099tgfM)
[in large language models. In The Eleventh International Conference on Learning Representations, ICLR 2023,](https://openreview.net/pdf?id=WZH7099tgfM)
_Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
15
-----
**A** **Appendix**
**A.1** **Dataset Details**
**A.1.1** **Dataset Construction**
For GSM8K, we evenly divide the original training dataset of 7,473 samples into two subsets, Dgold,1 and Dgold,2.
Additionally, we supplement both Dgold,1 and Dgold,2 with the data of the same distribution developed by (Chern
et al., 2023), until each contains 7,000 samples. Thus, the weak model uses Dgold,1, which includes both questions
and gold solutions, to obtain basic problem-solving capabilities. Meanwhile, the strong model can only access a
training dataset Q = {qi}, where qi ∈Dgold,2, consisting of 7,000 mathematical problems without ground truth
answers. GSM8K also includes 1,319 test samples.
For MATH, we employ the same subset of 500 representative problems as the test set, identical to that used
in Lightman et al. (2023). We then split the remaining 12,000 samples evenly between Dgold,1 and Dgold,2, each
containing 6,000 samples.
**A.1.2** **Statistics of MATH test set**
The distribution of difficulty levels across the 500 test
data samples in MATH is listed in Tab. 7.
**A.2** **Training Details**
|# L1 # L2 # L3 # L4 # L5|# Total|
|---|---|
|43 90 105 128 134|500|
|---|---|
Table 7: Data statistics of the MATH test set.
For supervised fine-tuning in Stage I, we adopt LoRA to fine-tune the strong model M with a learning rate of
1 × 10[−][4] and search for weight decay in the set {0, 0.01}. We run 2 epochs on GSM8K and 3 epochs on MATH,
with a batch size of 8. In Stage II, we employ two preference optimization methods. For DPO, we train the enhanced
strong model Mplus with a learning rate of 1 × 10[−][5] and run 1 epoch. For ORPO, we search for β in the set
_{0.1, 0.5, 1.0} with a learning rate of 3 × 10[−][5]_ and run 1 epoch. All experiments are conducted using A100 GPUs.
When constructing contrastive samples in Stage II, we sample n = 10 responses at temperature = 1.0, and use a
confidence threshold of τ = 0.6. Normally, we evaluate using greedy decoding. For calculating pass@k, we set
_k = 10 at temperature = 1.0._
**A.3** **Additional Analysis**
**A.3.1** **Diversity Analysis**
Mhybrid-ft[2] Micl-ft[2]
40% 36.92
30% 30.55
20% 19.56
15.77 [17.59] 15.62
Frequency (%) 13.72
10.46
10% 7.73 7.35 8.57
4.62 4.78
2.20 2.20
0% 0.99 0.99 0.23 0.15 0.00
1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10
Number of Clusters Number of Clusters
Figure 7: Frequency distribution of the number of distinct solutions on GSM8K supervised by Llama2-7b.
To investigate why Mhybrid-ft achieves high pass@k scores despite lower greedy decoding results, we explore the
diversity of responses generated by Mhybrid-ft and Micl-ft. We specifically examine the frequency distribution of the
number of distinct solutions for each question across the two strong model checkpoints.
Given a question from Dgold,2, we sample n = 10 responses at temperature = 1.0 for each checkpoint. We
consider two responses distinct if their ROUGE-L similarity is less than 0.7. We then compute the number of
clusters formed by these distinct responses and plot their frequency distribution in Fig. 7.
As shown in Fig. 7, Micl-ft[2] [tends to produce nearly the same sampled responses for each question in more than]
36% of the instances. This indicates a limited exploration of problem-solving paths and difficulty in generating
diverse, correct solutions during the sampling process. In contrast, Mhybrid-ft[2] [generates a variety of responses,]
increasing its hit rate with multiple sampling and thus achieving higher pass@k scores. Additionally, diverse
solutions are crucial for robust outcomes and model generalization (Yu et al., 2024; Wu et al., 2024). In Stage II,
16
-----
diverse solutions also ensure the distinction between positive and negative samples, demonstrating the rationale for
selecting Mhybrid-ft[2] [for preference optimization in Stage II.]
**A.3.2** **Training Accuracy of Stage I**
Tab. 8 presents the final answer accuracy and processlevel accuracy for both weak data and icl data utilized
in the initial round.[5] To compute process-level accuracy, we randomly sample a maximum of 1,000
training sample from each of weak data and icl data,
and evaluate them using GPT-4o following Xia et al.
(2024); Zeng et al. (2023), the prompt we use is illustrated in Tab. 13. Accuracy at this level is determined
strictly on the basis that there are no errors throughout
the intermediate reasoning steps.
From the results we can see that despite having
consistent final answer accuracy (with the exceptions
of Gemma-2b and Mistral-7b on MATH using augmented training data), there are noticeable differences
in process-level performance, leading to variations in
the effectiveness of Mweak-ft and Micl-ft. Moreover, it
is counterintuitive that models trained on icl data with
relatively low process-level accuracy achieve higher
performance. This might be because the models prefer self-generated solutions and can more effectively
learn those that better align with their inherent distribution (Panickssery et al., 2024; Ren et al., 2024; Fan
et al., 2024).
**A.4** **Additional Experiments**
**Greedy Decoding** **Pass@k**
**GSM8K**
weak-ft 57.47 77.26
_M[2]_
Llama2-7b icl-ft **63.76** 81.05
_M[2]_
hybrid-ft 62.62 **86.28**
_M[2]_
weak-ft 45.03 71.49
_M[2]_
Gemma-2b icl-ft **60.12** 80.14
_M[2]_
hybrid-ft 56.03 **85.14**
_M[2]_
weak-ft 66.72 85.67
_M[2]_
Mistral-7b icl-ft 66.64 84.08
_M[2]_
hybrid-ft **68.39** **88.70**
_M[2]_
**MATH**
weak-ft 10.80 34.80
_M[1]_
Llama2-7b icl-ft 11.80 **35.00**
_M[1]_
hybrid-ft **14.00** 33.60
_M[1]_
weak-ft **14.80** 38.80
_M[1]_
Gemma-2b icl-ft 13.60 33.60
_M[1]_
hybrid-ft **14.80** **39.60**
_M[1]_
weak-ft 10.80 34.20
_M[1]_
Mistral-7b icl-ft **15.60** 31.60
_M[1]_
hybrid-ft 14.20 **38.40**
_M[1]_
**Final Answer** **Process-Level**
**GSM8K**
Llama2-7b _Dˆˆiclweak[1]_ 89.8289.82 72.5076.50
_D[1]_
Gemma-2b _Dˆˆiclweak[1]_ 87.9787.97 73.1073.80
_D[1]_
Mistral-7b _Dˆˆiclweak[1]_ 92.3892.38 80.1077.90
_D[1]_
**MATH**
Llama2-7b _DDˆˆweakicl_ 46.1146.11 32.0439.22
Gemma-2b _DDˆˆweakicl_ 30.4031.90 26.3029.90
Mistral-7b _DDˆˆweakicl_ 24.7525.25 21.5025.60
Table 8: Training accuracy of Stage I.
**Test Acc.** **# Training Data**
**Gemma-2b**
SFT on Full Weak 10.00 6,000
SFT on Gold Weak 15.60 644
weak-ft 11.00 448
_M[1]_
icl-ft 11.40 448
_M[1]_
hybrid-ft 13.20 448 2
_M[1]_ _×_
**Mistral-7b**
SFT on Full Weak 14.40 6,000
SFT on Gold Weak 16.60 861
weak-ft 12.40 584
_M[1]_
icl-ft 15.60 584
_M[1]_
hybrid-ft 14.20 584 2
_M[1]_ _×_
Table 10: Stage I results on MATH without augmenting
training data. “Test Acc.” refers to Test Accuracy.
**Weak Model** **Full Weak FT** **Weak-ICL FT**
**GSM8K**
Llama2-7b 22.47 78.53
Gemma-2b 8.27 75.71
Mistral-7b 14.63 71.38
**MATH**
Llama2-7b 10.45 71.64
Gemma-2b -25.81 64.52
Mistral-7b 19.05 28.57
Table 11: Performance Gap Recovered (PGR) in Stage I.
Table 9: Greedy decoding and pass@k results (k = 10
and temperature = 1.0) for the three variants of enhanced
strong models obtained through weak-icl fine-tuning. The
best results are in bold.
5The relatively low accuracy observed in MATH explains why we choose to perform one round of iteration.
17
-----
**A.4.1** **Details of Stage I on MATH**
In the Stage I experiment conducted on the MATH dataset, it is found that the amount of training data selected
via final answer consistency is so limited that the strong model can hardly learn the effective features through
supervised fine-tuning. To address this, we randomly sample additional inconsistent data. Based on the weak
model’s performance (Llama-7bDˆicl) to 1,000 instances for Gemma-2b and 2,000 instances for Mistral-7b, and present the results in Fig. < Gemma-2b < Mistral-7b on MATH), we supplement the data (both _D[ˆ]weak 4. The and_
original amount of training data and test accuracy for these two weak models are shown in Tab. 10.
**A.4.2** **Pass@k Results**
Tab. 9 summarizes the greedy decoding and pass@k results for the three variants of enhanced strong models obtained
through weak-icl fine-tuning. Notably, Mhybrid-ft utilizes a training set that combines those used by Mweak-ft and
_Micl-ft. The results indicate that Mhybrid-ft outperforms its counterparts in terms of pass@k, achieving superior_
pass@k scores with margins of up to 5.23 points. The only exception occurs in the MATH dataset supervised by
Llama2-7b, where the underperformance is likely due to limited training data.
The superior performance of Mhybrid-ft can be attributed to the diversity of solutions in its training set (verified in
§A.3.1), validating our approach of adopting the final iteration of Mhybrid-ft from Stage I for preference optimization
in Stage II. It is important to note that while higher pass@k scores suggest greater potential, the true challenge
lies in effectively harnessing this potential, particularly in the weak-to-strong setting where no ground truths are
available. Our proposed weak-to-strong preference optimization in Stage II successfully addresses this challenge,
transforming theoretical potential into tangible performance gains in greedy decoding, as proved in §4.4.
**A.4.3** **PGR of Stage I**
Burns et al. (2023) propose a new metric called performance gap recovered (PGR) to measure the fraction of the
performance gap that can be recovered through weak supervision, as illustrated in Eq. 1. Tab. 11 displays the results
of the naive full weak fine-tuning (i.e., Full Weak FT) and our best weak-icl fine-tuning (i.e., Weak-ICL FT) in terms
of PGR, which also demonstrate that our method can outperform the simple competitor. However, the variations in
PGR across different weak models do not provide meaningful insights. In the experiments described in the main
text, we use test accuracy instead to provide a more detailed depiction of model performance.
PGR = [weak-to-strong]strong ceiling −[ −]weak floor[weak floor] _[.]_ (1)
**A.4.4** **Effect of SFT Data**
Tab. 12 presents more detailed comparative experimental results of Stage I on GSM8K. “Full Weak” denotes full weak fine-tuning, “Our Weak” is equivalent
to Mweak-ft[1] [, and “Our ICL” is equivalent to][ M][1]icl-ft[.]
“Gold Weak” refers to the scenario where weak data
with correct final answers are filtered and used for
supervised fine-tuning, which is impossible in the
weak-to-strong setting and just used for experimental
analysis. Similarly, “Gold ICL” refers to the scenario
where solutions with correct final answers, generated
by the strong model via weak ICL, are filtered.
Compared to using a large volume of noisy data
(i.e., Full Weak and Full ICL), reducing the data quantity while enhancing data quality can significantly
improve the accuracy of the trained model, with potential gains over 17 points. Although our method
performs slightly lower than the gold results, it proves
highly effective and stable in scenarios where obtaining the ground truth is impossible.
**Weak Model** **SFT Data** **Test Accuracy**
Full Weak 42.38
Gold Weak 54.21 (+11.83)
Our Weak 53.68 (+11.30)
Full ICL 59.14
Gold ICL 64.29 (+5.15)
Our ICL 61.71 (+2.57)
Full Weak 29.04
Gold Weak 46.40 (+17.36)
Our Weak 42.91 (+13.87)
Full ICL 58.61
Gold ICL 63.86 (+5.25)
Our ICL 59.21 (+0.60)
Full Weak 61.33
Gold Weak 67.55 (+6.22)
Our Weak 65.96 (+4.63)
Full ICL 62.32
Gold ICL 66.64 (+4.32)
Our ICL 65.43 (+3.11)
Llama2-7b
Gemma-2b
Mistral-7b
Table 12: Detailed results of Stage I on GSM8K.
18
-----
Question:
_{question}_
Student Solution:
_{solution}_
Your task involves three parts:
1. **Step-by-step Evaluation:** Go through the student solution carefully and identify key errors and potential misunderstandings
that led to the incorrect solution.
2. **Final Judgement:** Provide an overall judgement on the correctness of the student’s solution.
3. **First Error Step:** If the solution is incorrect, generate the step number where the first error occurs, otherwise generate N/A
here.
Here’s the format I want:
Step-by-step Evaluation: [Provide a step by step examination of the student solution and identify key errors and misunderstandings
here.]
Final Judgement: [Insert only **correct** or **wrong** here]
First Error Step: [Insert either N/A or the step number where the first error occurs]
Please follow this format without any additional introductory or concluding statements.
Table 13: Prompt used to evaluate process-level accuracy.
19
-----
| [
"Yan, Ma",
"Yuqing, Yang",
"Pengfei, Liu"
] | 2024-07-18T00:00:00 | null | false | 1 | 0 | null | http://arxiv.org/abs/2407.13647 | https://arxiv.org/abs/2407.13647 | https://www.semanticscholar.org/paper/5b02900dd920a4c663dbe96023c2f1ac6e485276 |
$\texttt{LM}^\texttt{2}$: A Simple Society of Language Models Solves Complex Reasoning | Despite demonstrating emergent reasoning abilities, Large Language Models (LLMS) often lose track of complex, multi-step reasoning. Existing studies show that providing guidance via decomposing the original question into multiple subproblems elicits more robustness in LLM reasoning -- a decomposer generates the subproblems, and a solver solves each of these subproblems. However, these techniques fail to accommodate coordination between the decomposer and the solver modules (either in a single model or different specialized ones) -- the decomposer does not keep track of the ability of the solver to follow the decomposed reasoning. In this paper, we propose LM2 to address these challenges. LM2 modularizes the decomposition, solution, and verification into three different language models. The decomposer module identifies the key concepts necessary to solve the problem and generates step-by-step subquestions according to the reasoning requirement. The solver model generates the solution to the subproblems that are then checked by the verifier module; depending upon the feedback from the verifier, the reasoning context is constructed using the subproblems and the solutions. These models are trained to coordinate using policy learning. Exhaustive experimentation suggests the superiority of LM2 over existing methods on in- and out-domain reasoning problems, outperforming the best baselines by $8.1\%$ on MATH, $7.71\%$ on JEEBench, and $9.7\%$ on MedQA problems (code available at https://github.com/LCS2-IIITD/Language_Model_Multiplex). | null | ## LM[2]: A Simple Society of Language Models Solves Complex Reasoning
**Gurusha Juneja[*]**
Microsoft Research, India
[email protected]
**Abstract**
**Subhabrata Dutta[*]**
IIT Delhi, India
[email protected]
**Tanmoy Chakraborty**
IIT Delhi, India
[email protected]
size like GPT-4 (OpenAI, 2023), or (ii) finetuning
a relatively smaller LLM using domain-focused
data (Shao et al., 2024; Toshniwal et al., 2024;
Dutta et al., 2024). Methods from the former category heavily rely on the proprietary LLM being
used and are prone to fail absolutely when employed with less powerful models. The latter category, though cost-effective compared to humongous LLMs, often loses in generalizability due to a
narrow training domain.
**The chronicle of decomposed reasoning. A**
number of recent literature has pointed out that
LLMs tend to perform better on complex reasoning tasks when the problem is decomposed into
step-by-step subproblems (Zhou et al., 2023; Khattab et al., 2022; Juneja et al., 2023). Earlier techniques demonstrated the superiority by providing
the model with examples containing the original
problem decomposed into multiple sub-problems
along with their answers (Zhou et al., 2023). However, Juneja et al. (2023) illustrated that decoupling
the decomposer from the solver by finetuning a separate decomposer language model (LM) to coordinate with a larger solver LM is beneficial to simply
prompting a single monolithic LM to decompose
and solve. Echoing their findings, Wu et al. (2024)
also found that distilling decomposition abilities
from a larger LM to a smaller LM is much more
generalizable compared to decomposing the solver
abilities directly.
**Our contributions. However, a major bottle-**
neck in existing methods of decomposer finetuning
is the lack of tightness between the decomposersolver interactions. Typically, the decomposition is
done in a memoryless manner, with or without the
solver’s initial response; no strategy is employed
to track whether the solver can follow the decomposed chain of reasoning. Towards this very end,
we propose a novel multi-LLM coordination framework, Language Model Multiplex (LM[2]). LM[2] is
built upon three separate LMs, each dedicated to
Despite demonstrating emergent reasoning abilities, Large Language Models (LLMS) often
lose track of complex, multi-step reasoning. Existing studies show that providing guidance via
decomposing the original question into multiple subproblems elicits more robustness in
LLM reasoning – a decomposer generates the
subproblems, and a solver solves each of these
subproblems. However, these techniques fail
to accommodate coordination between the decomposer and the solver modules (either in
a single model or different specialized ones)
– the decomposer does not keep track of the
ability of the solver to follow the decomposed
reasoning. In this paper, we propose LM[2] to
address these challenges. LM[2] modularizes
the decomposition, solution, and verification
into three different language models. The decomposer module identifies the key concepts
necessary to solve the problem and generates
step-by-step subquestions according to the reasoning requirement. The solver model generates the solution to the subproblems that are
then checked by the verifier module; depending upon the feedback from the verifier, the
reasoning context is constructed using the subproblems and the solutions. These models are
trained to coordinate using policy learning. Exhaustive experimentation suggests the superiority of LM[2] over existing methods on in- and
out-domain reasoning problems, outperforming
the best baselines by 8.1% on MATH, 7.71%
on JEEBench, and 9.7% on MedQA prob[lems (code available at https://github.com/](https://github.com/LCS2-IIITD/Language_Model_Multiplex)
[LCS2-IIITD/Language_Model_Multiplex).](https://github.com/LCS2-IIITD/Language_Model_Multiplex)
**1** **Introduction**
Recent trends in solving complex reasoning tasks
using Large Language Models (LLMs) typically
follow two different dominant approaches: (i)
well-curated prompting techniques (Zheng et al.,
2023; Yao et al., 2024) on LLMs of exorbitant
*Equal contribution
-----
Concepts:
1. Triangle Inequality
2. Arithmetic progression
Related Equations
1. Let x,y,z be sides of triangle, then x+y>z, x+z>y, z+y>x
2. If x,y,z are in arithmetic progression then y-x=z-y
|: How many distinct, non-equilateral triangles with a perimeter of|Col2|Col3|
|---|---|---|
|: How many distinct, non-equilateral triangles with a perimeter of 60 units have integer side lengths,, and such that,, is an arithmetic sequence? 1. We know that for a triangle with side lengths a, b, and c, the sum of the lengths of any two sides must be greater than the length of the third side. 2. Since we are given that a, b, and c form an arithmetic sequence, we can express b as the average of a and c. This gives us . ... 6. Since a, b, and c form an arithmetic sequence, we can express c in terms of a as c = a + d, where d is the common difference. ... 10. This gives us the side lengths of the triangle as a = 10, b = 15, and c = 30. Hence final answer is Let be the common difference, so and We can assume that is positive In particular, can't be 0, because the triangle is not equilateral Then the perimeter of the triangle is, so Hence, the sides of the triangle are, 20, and These sides must satisfy the triangle inequality, which gives us Solving for, we find, or Therefore, the possible values of are 1, 2,, 9, which gives us possible triangles|Solver LM||
||||
||||
||||
||Verifier LM||
||||
.
SQ: What is a, b, c in terms of common difference d?
SA:Since a, b, and c form an arithmetic sequence, we can
Decomposer LM express c in terms of a as c = a + d, where d is the common
difference
SQ:What is the value of a + c?
SA:Since we are given that a, b, and c form an arithmetic
sequence, we can express b as the average of a and c. This
gives us . The perimeter of the triangle is given
by P = .
We are given that the perimeter is 60 units,
so we have .Concepts:
Solving for a + c,
Verifier LM
we get a + c = 40
, or SQ: What is triangle inequality in terms of a,b,c?
, 9, which gives us SA: Triangle inequality sayd
Figure 1: The inference procedure of LM[2] on a question from the MATH dataset. A question (in blue) is provided to
the Solver LM that produces an incorrect answer (in red). The question is then provided to the Decomposer LM that
generates the concepts and step-by-step subquestions (in lilac). Each subquestion is answered by the Solver LM, and
the sub-answer is verified by a Verifier LM. If the Verifier LM approves the sub-answer, that subqustion-subanswer
pair is added to the context of reasoning steps; otherwise, a new subquestion is generated. The question, concepts,
subquestions, and subanswers are provided in context to the Decomposer LM to generate the next subquestion.
Finally, the question, concepts, subquestions, and subanswers are provided to the Solver LM to generate the final
answer (in green).
three different components of complex multistep
reasoning – a solver LM is responsible for answering questions; a verifier LM provides feedback on
the correctness of the output from the solver, and
a decomposer LM identifies the basic concepts
required to solve the problem and generates stepby-step subproblems by decomposing the original
question (see Figure 1 for a working example). Unlike prior approaches, the decomposer in LM[2] generates each subproblem depending on the solver’s
answers to prior subproblems, along with the verifier’s feedback on those answers. Furthermore, the
decomposer generates the conceptual requirements
to solve the problem, which further streamlines
the solver LM. Irrespective of the complexity of
the underlying reasoning, the world knowledge required to answer any question is typically better
preserved in larger, proprietary LMs. Considering
this, we use GPT-3.5 (text-davinci-003) as the
solver without finetuning. For both the decomposer
and verifier, we implement parameter-efficient finetuning (Hu et al., 2022) of LLaMA-2 (13 billion
parameters) separately. First, these models are finetuned separately towards the tasks of decomposition and verification using datasets annotated by
GPT-4. The decomposer is then taught to coordinate with the solver and the verifier models in a
policy learning setup. LM[2] achieves promising performance across a diverse set of reasoning tasks.
On the MATH dataset of mathematical reasoning,
LM[2] outperforms the best decomposer-tuning baseline by a staggering margin 8.1% of absolute accuracy on average. Although LM[2] uses the training
split of the MATH dataset for tuning the decomposer and the solver, it seamlessly generalizes to
out-of-distribution tasks in MedQA and JEEBench,
outperforming the best competitive baseline with
9.7 % and 7.71% difference on absolute accuracy
respectively.
Beyond the discourse of overall numbers, we
perform in-depth ablation analyses to identify the
roles of each component of the model. We observe that (i) the verifier LM and concept generated
by the decomposer LM play a crucial role in generalizing out-of-distribution reasoning tasks like
MedQA, JEEBench Chemistry, etc.; (ii) finetuning the decomposer is crucial for better concept
identification – finetuned LLaMA-2 7B generates
more effective conceptual requirements compared
to even GPT-4; (iii) even while not using all the
-----
modular components of LM[2], the prompt template
of structured reasoning boosts the performance of
GPT-4.
**2** **Related Work**
The efficacy of explicitly generating intermediate
reasoning steps over direct generation of the required answer was first demonstrated by Nye et al.
(2021). Chain-of-thought prompting (Wei et al.,
2022) generalized the scratchpad learning of Nye
et al. (2021) into an in-context learning regime
using LLMs. Chain-of-thought and its successors (Chen et al., 2022; Yao et al., 2024) typically
let the decomposition of a composite, multi-step
reasoning problem remain implicit in the LLM.
Zhou et al. (2023) demonstrated that instead,
an explicit call to the LLM to generate multiple
smaller problems that are steps to answer the original query achieves more robust reasoning. Their
proposed method, Least-to-Most prompting, uses
these simpler subproblems and their answers as
the context to solve the original problem. Similarly, Khot et al. (2023) proposed a promptingbased problem decomposition approach where the
LLM is asked to decompose a complex task using
few-shot examples. However, this still burdens a
single language model in handling both decomposition and solution. Juneja et al. (2023) circumvented
this challenge by distilling the decomposition abilities into a relatively smaller language model. Their
proposed method, DaSLaM, utilizes two separate
language models that coordinate with each other to
solve complex reasoning problems. Their findings
suggest that finetuning the decomposer is more generalizable than finetuning the solver model. This
has been further supported by Wu et al. (2024)
recently. Tarasov and Shridhar (2024) explored
the distillation of decomposition abilities via offline reinforcement learning. Khattab et al. (2022)
proposed a programmatic retrieval augmentation
framework, namely Demonstrate-Search-Predict
(DSP), for knowledge-intensive generation tasks.
DSP relies on the coordination between a generative LM and a retrieval model through sophisticated
programs. Recent attempts have been made to incorporate dense verifiers (typically, a finetuned,
bidirectional language model acting as a classifier)
aiding a generative model towards robust, verifiable
problem solving and text generation (Cobbe et al.,
2021; Sun et al., 2023). Different techniques for
verification of LM-generated outputs have been pro
posed subsequently, such as self-verification (Weng
et al., 2023), majority voting (Li et al., 2023), etc.
**3** **Methodology**
Our proposed method, LM[2], is built upon the coordination of multiple LMs to perform reasoning
in a modular fashion. However, such coordination
is not implicit in the pertaining stage of a model;
instead, we seek to inculcate this ability via finetuning (parts of) the LM multiplex. To this end, LM[2] is
built upon three functional components: a (preferably larger) solver model, a decomposer model,
and a verifier model.
For fine-grained control over the function of the
different components of LM[2], we make use of a
structured, step-by-step input-output framework
(see Figure 1). The role of each of the modules in
LM[2] is described as follows.
**3.1** **Decomposer**
The decomposer LM guides the solver LM to solve
a multi-step reasoning question in two ways. First,
it provides the solver model with a set of concepts
required to solve the problem. Second, it tells
the solver LM what is the next sub-question required to solve given the previous sub-questions
and their answers. More specifically, the decomposer LM is a function that can be defined as
_D(q,_ _si, sai_ _, c) : Q_ _S_ _SA_ _S, C_,
_{_ _}_ _×_ _×_ _→{_ _}_
where q represents the initial question to be solved,
_si, sai_ denotes the set of previous sub-questions
_{_ _}_
(si) and their corresponding answers (sai), and (c)
signifies whether the function needs to predict the
concept or the next sub-question. Q is the space of
all the questions, S is the space of all sub-questions,
_SA is the space of all sub-answers, and C is the_
space of all concepts.
**Supervised finetuning. The decomposer train-**
ing is performed in two stages similar to (Juneja
et al., 2023). The first stage is supervised finetuning, where the language model is finetuned on a
dataset prepared using GPT-4. To create the dataset,
we provided GPT-4 with a question and its gold
reasoning. It was then asked to first generate all the
concepts required to solve the question, followed
by sub-questions and sub-answers. Only the questions that were answered correctly were included
in the dataset. Each sample in the dataset can
be expressed as a tuple {Q, c, {si, sai}i[n]=1[, s][n][+1][}][,]
where sn+1 is the next sub-question given the previous sub-questions and answers. The decomposer
-----
was then finetuned on the standard language modelling objective.
**Policy optimization. With the supervised fine-**
tuning step, the decomposer LM is conditioned to
respond to reasoning problems with concepts and
decomposed subquestions. However, it is still not
able to take the feedback from the solver and the
verifier models into account. To this end, we utilize Proximal Policy Optimization (Schulman et al.,
2017) with the decomposer as the policy and the
solver and the verifier model as a black-box environment. Precisely, we compute different types
of rewards utilizing the feedback from the verifier
model that takes the solver model’s response into
account at each step and provides the decomposer
with necessary refinement signals.
**3.2** **Verifier**
Given the complexity of multistep reasoning, we
need the verifier to be able to provide nuanced feedback to the decomposer on the possible mistakes
made by the solver; a binary correct/incorrect message as employed by prior works with verifiers (Li
et al., 2023; Weng et al., 2023) will limit the decomposer model’s scope of vision. For fine-grained
control, the verifier is finetuned on a supervised
dataset containing a question, an answer with an
error made in the correct answer, a classification
for the type of error, and an explanation for the
classification. The verifier classifies the given input
into nine classes as follows: 1 Conceptual mis
1
takes, 2 Computational mistakes, 3 Procedural
mistakes, 4 Misunderstood question, 5 Mistake
2
4
in the first step, 6 Mistake in first half, 7 Mistake
3
5
7
in second half, 8 Mistake in last step, and 9 No
6
8
9
mistake. The dataset was produced using GPT-4,
asking it to generate an explanation for the classification given the correct solution, wrong solution
and the classification. The verifier is finetuned to
generate the explanation and the classification (see
Section 3.3 for examples of each type of error message and explanation).
**3.3** **Training with Decomposer Feedback**
The training dataset curated for the decomposer
LM consists of only the correct answers; hence,
the decomposer is blind to the possible errors that
the language model can make. In order to make
the decomposer generate meaningful questions, we
further finetune the decomposer while working in
synergy with the solver language model using Policy gradient methods.
**Environment. The environment consists of a**
black-box solver model Θ. The model Θ generates an answer to the current question given the
concepts and previous questions and their answers.
**Policy, action and state space. The decomposer**
language model ϕ comprises the policy network. A
state s in the state space S is defined by the concatenation of the initial state s0 and all the actions
taken from the initial state to the current state. The
initial state s0 is defined as the initial question Q.
The action space is defined as the token space of
the language model ϕ. Hence, a state sn can be
represented as (s0, {ai}i[n]=1[)][, where][ a][i][ is the action]
taken at the ith time step.
**Reward function. The reward is based on the**
feedback given by the verifier at each sub-question
produced by the decomposer. The reward structure
is intuitively designed to impose penalties for errors
occurring in earlier sub-questions relative to those
occurring in later ones. This is because fixing an
early mistake can significantly increase the chances
of the question being correct. Further, the policy
is penalised more for conceptual and procedural
mistakes as compared to computational mistakes.
We construct the reward function for the k[th] subquestion as follows:
_R = γ[k]_
_ri_ (1)
_i=1_
X
where γ < 1 is the discount factor responsible
for imposing more penalties on the earlier generations. ri are the rewards for individual feedback
given by the verifier as defined below (for each
type of reward, we provide an example question
asked by the decomposer, an erroneous answer to
that question by the solver, type of error identified
and the explanation generated by the verifier in red
textboxes).
_Conceptual correctness reward is defined as,_
_r1 = −0.15I[V (sk, sak) = 1]_ (2)
where I is the indicator function, V is the verifier
that takes in input the k[th] sub-question (sk) and its
answer produced by the solver (sak) and outputs
the category of mistake. This reward accounts for
any mistake made by the solver in understanding
or while applying a concept incorrectly.
-----
the sub-question was either incoherent with the previous questions or was too complex for the model
to answer. This kind of mistake is important to
address and, hence, is given a higher weight.
**Q : How many distinct values of a, b, c are possible?**
**A : This gives us the side lengths of the triangle as a = 10,**
_b = 15, and c = 30._
**Verifier : Mistakes Understanding Question: The model has**
made a mistake by not giving the number of distinct values.
**Reward based on place of mistake. As dis-**
cussed above, later mistakes are penalised less than
the earlier ones. Hence, if a mistake is made in the
first step, it is given a reward of −0.2. If the model
makes a mistake in the first half of the sub-answer,
it is given a reward of −0.12. For a mistake in the
last half of the sub-answer, it is given a reward of
_−0.08. If the mistake is made in the last step, it is_
given a reward of −0.05.
**No-mistake rseward is the case when the model**
has not made any mistake in answering the subquestion and is given a positive reward of +1.
**3.4** **Inference**
During the inference, the decomposer, solver, and
verifier models work together to answer a given
question (see working example in Figure 1). During the inference, the decomposer first produces
a list of concepts required to solve the question.
Then, given the question and concepts as context,
the decomposer produces a sub-question. The subquestion is answered by the solver. Now, given the
sub-question and sub-answer, the verifier provides
feedback in the form of a multi-class classification
into the above-described classes of mistakes. If the
feedback provided by the verifier consists of either
a conceptual mistake, procedural mistake, mistake
in understanding or mistake in the first step, we
again generate the sub-question.
**4** **Experiments**
For all the experiments, LM[2] uses the OpenAI
text-davinci-003 model (hereafter mentioned
as GPT-3.5) as the solver and LLaMA-2 13B (Touvron et al., 2023) as the base models for the decomposer and the verifier.
**4.1** **Training data curation**
For the first stage of finetuning of the decomposer
LM, we curated a dataset of 15, 396 question, concept, sub-question, sub-answer tuples. The questions were taken from the train split of the MATH
**Q: What is a, b, c in terms of common difference d?**
**A :Since a, b, and c form an arithmetic sequence, we can express**
_c in terms of a as c = a + d, where d is the common difference._
**Verifier: Conceptual Mistakes: The student misunderstood the**
concept of finding the possible values of the common difference in
an arithmetic sequence to form a triangle with a given perimeter.
_Computational correctness reward is defined as,_
_r2 = −0.05I[V (sk, sak) = 2]_ (3)
This reward accounts for cases where the model
has made any mistakes in calculation or formula
application. These kinds of mistakes are weighted
less compared to all other types because an external
tool can be easily used for these mechanical tasks.
**Q : What is the value of a + c?**
**A : Since we are given that a, b, and c form an arithmetic**
sequence, we can express b as the average of a and c. This
gives us b = _a+2_ _c_ [. The perimeter of the triangle is given by]
_P = a + b + c = a +_ _[a][+]2_ _[c]_ + c = [3][a][+3]2 _[c]_ . We are given that the
perimeter is 60 units, so we have 60 = [3][a][+3]2 _[c]_ . Solving for a + c,
we get a + c = 20
**Verifier : Calculation Mistakes: The model has made a calcula-**
tion mistake while calculating the value of a + c. a + c should be
40.
_Procedural correctness reward is defined as,_
_r3 = −0.15I[V (sk, sak) = 3]_ (4)
This encompasses any mistake made by the model
while following a step-by-step procedure whose information is already given in-context while solving
the problem.
**Q : What are the possible values of a, d?**
**A : Substituting this into the equation a + c = 20, we get**
_a + (a + d) = 20, which simplifies to 2a + d = 20 −_ _a. Since_
the side lengths must be integers, a and d must be integers. We
can then try different values of a and d that satisfy the equation
2a + d = 20 − _a. After trying different values, we find that the_
only solution that gives integer side lengths is a = 10 and d = 20.
**Verifier : Procedural Mistakes: The student made a mistake in**
the procedure of solving the problem by not accurately finding the
possible values of the common difference.
_Question understanding reward is defined as,_
_r4 = −0.2I[V (sk, sak) = 4]_ (5)
This reward accounts for the cases when the model
has made a mistake in understanding the subquestion. Either it has not answered the quantity
asked in the sub-question. This gives a signal that
-----
|Dataset|Method|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||CoT|L2M|PHP|DSP|DaSLaM|LM2|
|PnC NT ALG I-ALG Calc. P-ALG Geom.|16.4 14.4 27.6 16.4 14.0 32.3 14.2|16.0 11.0 22.4 16.8 14.58 28.0 12.5|10.2 9.8 24.0 10.0 14.28 26.5 14.0|16.2 20.3 15.3 17.0 18.8 28.0 5.2|21.4 26.1 33.4 24.8 18.2 44.0 21.4|30.0 41.0 34.0 27.8 34.0 47.0 32.0|
|MedQA|50.3|49.8|47.5|52.3|50.1|57.1|
Table 1: Performance comparison of LM[2] with the baselines on MATH and MedQA datasets using GPT-3.5 as
the solver LM.
**4.5** **Ablation Study**
In our investigation, we perform five types of ablation studies aimed at comprehensively understanding the significance of each component within the
LM[2] pipeline.
We start with investigating the relevance of the
verifier by conducting an experiment where we remove it entirely (LM[2]\V). Here, we accept each
question generated by the decomposer during the
inference process without any verification. Then,
we explore the role of concepts within the pipeline.
Here, we alter the approach by instructing the decomposer to directly generate sub-questions, without providing the concepts to the Solver LM during
the answer generation phase (LM[2]\C). Following
this, we investigate the incremental gains achieved
through the second stage of finetuning via policy
learning. To accomplish this, we analyze the performance of the decomposer checkpoint after the
initial stage of fine-tuning, referred to as (LM[2]\RL).
To assess the impact of different types of rewards
provided, we partition the rewards into two distinct
categories: i) based on the type of mistake, which
encompasses conceptual, computational, procedural, and question understanding correctness, and
ii) based on the position of mistake. Subsequently,
we come up with two ablation variants, finetuned
using each category of rewards: LM[2]-Type and LM[2]Position.
**5** **Results**
We summarize the performance of LM[2] along with
the baseline methods on the MATH and MedQA
datasets in Table 1 and on the JEEBench dataset in
Table 2. Across all the datasets, LM[2] improves upon
existing methods (using GPT-3.5 solver) by a huge
margin. It demonstrates an average 8% improvement on the MATH dataset and an average 2.5%
improvement on the JEEBench dataset as compared
dataset (Hendrycks et al., 2021). The questions
were taken from the MATH dataset. For verifier
LM finetuning, a dataset of 3, 674 question-answerclassification tuples was generated. Details of the
prompts used for each of these steps are provided
in Appendix A.
**4.2** **Training details**
We finetune LLaMA2-13B for both the decomposer
and verifier. We train for 8 epochs with a batch size
of 128, learning rate 2e-5, warmup steps of 100, a
Lora r value of 4, LoRA Alpha of 16 and dropout of
0.05. The models were trained in 8-bit quantization
on an 80G A100 GPU.
For the second stage of fine-tuning, we finetuned
the last 3 layers of LoRA adapters, using a batch
size of 16, gradient accumulation steps=4, init kl
coef=0.01, target=4. For inference, we used a temperature of 0 in all experiments for consistency of
results with a max output length of 2000.
**4.3** **Evaluation**
We evaluate our method on hard reasoning datasets
that require multi-step reasoning. These datasets
include MATH (Hendrycks et al., 2021) (test split),
JEEBench (Arora et al., 2023), and MedQA (Jin
et al., 2020) (English questions). The MATH
dataset contains math questions from challenging
math competitions, since it was also used for training, this shows our performance on in-domain questions. Next, we evaluate on the out-of-distribution
datasets like JEEBench which contains PCM questions extracted from the JEE Advanced exam and
MedQA which contains open-domain questions
from professional medical board exams. We only
evaluate questions in the English language.
**4.4** **Baseline Details**
We compare LM[2] with five existing methods:
Chain-of-thought prompting (CoT) (Wei et al.,
2022), Least-to-most prompting (L2M) (Zhou
et al., 2023), Progressive Hint Prompting
(PHP) (Zheng et al., 2023), Demonstrate-SearchPredict (DSP) (Khattab et al., 2022), and
**DaSLaM (Juneja et al., 2023). The original set-**
ting of PHP requires an 8-shot prompting; however,
since all other methods including LM[2] predict in the
zero-shot setting, we use PHP in 1-shot for a fairer
comparison.
-----
Dataset
Method
Phy. Math. Phy. Math. Phy. Math. Phy. Math. Chem. Chem. Chem. Chem.
MCQ MCQ Multi. Multi. Num. Num. Int. Int. Int. Num. Multi. MCQ
CoT 33.33 21.9 6.25 12.0 3.03 1.69 12.5 10.8 17.3 11.6 11.6 40.0
PHP 22.22 17.07 6.25 7.59 3.03 1.69 0* 4.0 11.7 9.7 12.2 37.5
L2M 22.22 21.9 6.25 12.5 3.03 3.38 10.0 10.8 13.0 9.7 10.0 20.0
DaSLaM 55.5 29.5 18.7 16.0 6.06 10.1 15.7 11.7 14.2 9.2 11.6 14.6
GPT4 **55.5** **34.1** **27.5** **21.5** 15.1 11.8 **22.7** **24.3** 17.9 **25.5** **48.3** **60.0**
LM[2] 51.85 30.18 26.8 16.4 **15.15** **13.1** 16.2 13.5 **26.0** 23.2 26.6 53.3
LM[2]\V 37.03 24.52 14.6 11.7 12.2 11.4 11.4 11.7 17.3 16.2 13.3 30.0
LM[2]\C 29.62 20.75 14.6 9.4 9.09 10.8 9.0 8.1 17.3 11.6 13.3 16.6
GPT4-C 29.62 28.3 14.6 11.5 15.15 11.4 9.0 11.4 21.7 23.2 33.33 30.0
LM[2]\RL 33.33 21.9 18.7 12.7 12.2 10.1 10.0 8.1 17.3 12.4 13.3 27.3
LM[2]-Type 46.1 28.0 20.3 14.0 13.4 11.4 15.0 13.5 24.0 23.2 23.6 45.4
LM[2]-Position 38.4 24.52 16.0 12.9 12.2 11.4 15.0 10.8 24.0 20.6 20.3 33.0
GPT35-SP 33.3 29.2 7.5 12.6 9.0 8.4 12.5 8.0 17.6 9.2 12.2 41.6
GPT4-SP 61.1 36.5 30.0 26.5 30.0 14.2 43.75 32.0 17.6 36.5 49.1 66.6
Table 2: Performance of LM[2] **on JEEBench Dataset along with baselines and ablation variants. (Top third) we**
highlight best and second best methods in boldface and underline. LM[2] generally outperforms all existing prompting
techniques with GPT-3.5 on different topics and different types of questions (other than Physics MCQ questions).
In 3/12 cases, LM[2] outperforms GPT-4. (Middle third) we observe a large drop in performance with each ablation
variant, pointing towards an efficient integration of these modules into LM[2] (see Section 4.5 for the description of
each variant). (Bottom third) Performance of the structured answer generation employed in LM[2], without decomposer
and verifier, using GPT-3.5 and GPT-4 as solvers.
to the best-performing baseline DaSLaM.
**Can it improve on out-of-domain tasks? In**
both DaSLaM and LM[2], the solver model is kept
frozen with the hope of retaining generalizability.
However, the decomposer model in both methods
(and the verifier in LM[2]) are finetuned using mathematical reasoning problems. This raises the question of the generalizability of these finetuned components over problems other than mathematical reasoning. One of the most significant challenges with
DaSLaM is that it is not able to perform well on
out-of-domain tasks like JEEBench Chemistry. We
find that our method can surpass this limitation as
can be seen in Tables 1 (MedQA) and 2 (JEEBench
Chemistry). While DaSLaM degrades the performance over CoT on MedQA, LM[2] achieves an absolute accuracy gain of 6.8 percentage points.
**How important is the verifier? Next, we seek**
to investigate the relative importance of each component in our pipeline. We observe that the accuracy decreases substantially upon removing the verifier model (LM[2]\V in the middle third of Table 2).
We can see that there is a drop of 13.0% in Chemistry versus 10.08% in Physics and 3.4% in Math
subsets. The relative drop in accuracy with the ablation of the verifier is sharper with multi-answer,
numeric, and integer answer questions. This makes
sense given the computational reasoning requirement is higher in these problems and the verifier
plays a crucial role in guiding the decomposer and
the solver along the correct reasoning path.
Figure 2: Comparison of token generation cost. We
depict the average number of tokens generated by the
solver model using different methods to solve the given
question averaged over 50 questions from the JEEBench
dataset.
**How important are the concepts? As can be**
seen from Table 2, removing concepts decreases the
accuracy of Physics subset by 11.6%, Maths subset
by 6.03%, and Chemistry subset by 17.5%. This
shows that concepts also play a very important role
in improving the performance on out-of-domain
datasets like Physics and Chemistry. Typically,
LM[2]\C fares worse than the rest of the ablation
variants, demonstrating that the concepts are the
most important component in LM[2].
**GPT-4 as concept generator? We also check**
-----
Figure 3: Comparison of GPT-4, DaSLaM and LM[2] on an example from MATH dataset.
how our decomposer compares to GPT-4 while
generating concepts. To compare this, we prompt
GPT-4 to generate concepts given the question. We
observe that there is an average decrease of 9.13%
when generating concepts using GPT-4 when compared to the Decomposer model, indicating the
higher quality of concepts generated as a result of
feedback-based fine-tuning.
**What is the effect of feedback-based finetun-**
**ing? The effect of feedback-based fine-tuning is**
evident when comparing the performance of the decomposer without the second stage of fine-tuning
alongside the verifier to that of LM[2]. On average,
we observe a notable decrease of 9.6% in performance when the second stage of fine-tuning is omit
ted. This finding highlights the significance of finetuning as a crucial step in optimizing model performance. However, the importance of concepts and
the verifier appears to outweigh that of fine-tuning.
This suggests that while fine-tuning contributes to
improved model performance, the incorporation of
concepts and a verifier into the model architecture
yields more substantial enhancements.
**How does the structured answering template**
**contribute? Recall that in LM[2], we introduce a**
novel, structured answering template for controllable coordination between the three models. It is
imperative to investigate the role of such a template alone behind the performance boost. We
make use of the template with two different solver
-----
models, GPT-3.5 and GPT-4. As we can see in
the bottom third of Table 2 (coined as modelnameSP), both models improve upon their base performance with our structured template. However, the
stronger GPT-4 model is able to utilize the template much more efficiently, with an average gain
of 7.8% across the JEEBench problems. Typically,
improvement on Physics problems is higher than
the Math problems, indicating that language models are not very good at retrieving physics concepts and solving the problem when using chainof-thought prompting. It should noted that while
the structured answering template alone is a powerful boost, it is much weaker alone without the
complete coordination in LM[2].
**Does guided reasoning help limit token us-**
**age? An important challenge with iteratively inter-**
acting with an LLM is the increased token usage
that will translate to expenses in either computational or monetary terms. In Figure 2, we plot
the average token usage (per problem) incurred
by the solver model (GPT-3.5) while using LM[2]
and DaSLaM against that of base chain-of-thought
generation. Note that we only show the token usage corresponding to the modified responses while
using LM[2] and DaSLaM. Both these methods originally use base CoT to generate the initial response
and therefore, their total token usage will always
be higher than that of CoT. However, the added
structure and guided reasoning significantly reduce
the token usage in the modified response. LM[2] prevails in this aspect too. A major reason behind
this is the step-by-step synergy between the decomposer, the solver, and the verifier in LM[2]. Since the
decomposer generates the subquestion depending
upon the response from the solver to the previous
subquestion, the chances of redundant generation
decrease, as opposed to DaSLaM where the subquestions are generated all at once.
**Example analysis. To further understand the nu-**
ances of LM[2], we perform an analysis of the generated output on an example from the MATH dataset
(see Figure 3). We compare between LM[2], DaSLaM
and GPT-4 with CoT. As we can see, GPT-4 makes
an incorrect interpretation of the question itself. It
assumes that the total journey after delay takes 10
hours, leading to an incorrect choice of option. The
subquestions produced by DaSLaM do not adhere
to the order of reasoning required to solve the problem and generate redundant questions. It starts with
asking What is the total distance to be covered?
However, in the second question, it asks for the
speed of the train which is already given in the
question itself. The 3rd subquestion generated by
DaSLaM is actually the original question, and the
solver makes a numerical mistake by simplifying
3d
the fraction 754 [to] 300d [instead of] 100d [. Without a ver-]
ifier, this erroneous response is integrated into the
reasoning context of the solver. In the next questions, the same problem is asked to be solved and
the solver continues to make incorrect responses.
With LM[2], we observe a much more well-defined,
crisp line of questioning by the decomposer model;
the solver is able to reach the correct answer option without regenerating the same information or
drawing incorrect subanswers.
**6** **Conclusion**
In this paper, we present LM[2], a cooperative cohort
of generative language models working together to
solve complex reasoning problems. LM[2] utilizes a
frozen solver model that is guided to solve reasoning problems by incrementally answering questions
framed by a decomposer model and checked by the
verifier model that is trained to coordinate with
each other. We find that LM[2] proves its supremacy
over existing methods over a variety of reasoning
tasks, both in-domain and out-domain. We find that
despite being trained using mathematical reasoning examples, our proposed structured response
scheme along with the fine-grained verification
strategy plays a crucial role in generalizing LM[2]
to heavily out-of-distribution tasks like medical
question answering and chemistry.
**Limitations.** Despite promising results, LM[2]
bears some inherent limitations. Compared to
purely prompting-based methods, it requires a certain computational overhead for the two-staged
training. With proprietary LLM-based solvers, LM[2]
incurs extra token usage over single-pass solutions
like chain-of-thought. Implicit limitations of the
solver model, like lack of length generalization,
arbitrary digit manipulation, etc. are expected to
be inherited in LM[2] as well. A possible future work
can be towards incorporating deterministic solvers
and tools into the multiplex.
**References**
Daman Arora, Himanshu Gaurav Singh, and Mausam.
[2023. Have llms advanced enough? a challenging](http://arxiv.org/abs/2305.15074)
[problem solving benchmark for large language mod-](http://arxiv.org/abs/2305.15074)
[els.](http://arxiv.org/abs/2305.15074)
-----
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Subhabrata Dutta, Ishan Pandey, Joykirat Singh, Sunny
Manchanda, Soumen Chakrabarti, and Tanmoy
Chakraborty. 2024. Frugal lms trained to invoke symbolic solvers achieve parameter-efficient arithmetic
reasoning. In Proceedings of the AAAI Conference
_on Artificial Intelligence, volume 38, pages 17951–_
17959.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. NeurIPS.
Edward J Hu, yelong shen, Phillip Wallis, Zeyuan AllenZhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu
[Chen. 2022. LoRA: Low-rank adaptation of large](https://openreview.net/forum?id=nZeVKeeFYf9)
[language models. In International Conference on](https://openreview.net/forum?id=nZeVKeeFYf9)
_Learning Representations._
Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng,
[Hanyi Fang, and Peter Szolovits. 2020. What disease](http://arxiv.org/abs/2009.13081)
[does this patient have? a large-scale open domain](http://arxiv.org/abs/2009.13081)
[question answering dataset from medical exams.](http://arxiv.org/abs/2009.13081)
Gurusha Juneja, Subhabrata Dutta, Soumen Chakrabarti,
Sunny Manchanda, and Tanmoy Chakraborty. 2023.
[Small language models fine-tuned to coordinate](https://doi.org/10.18653/v1/2023.emnlp-main.225)
[larger language models improve complex reasoning.](https://doi.org/10.18653/v1/2023.emnlp-main.225)
In Proceedings of the 2023 Conference on Empiri_cal Methods in Natural Language Processing, pages_
3675–3691, Singapore. Association for Computational Linguistics.
Omar Khattab, Keshav Santhanam, Xiang Lisa
Li, David Hall, Percy Liang, Christopher Potts,
and Matei Zaharia. 2022. Demonstrate-searchpredict: Composing retrieval and language models for knowledge-intensive nlp. _arXiv preprint_
_arXiv:2212.14024._
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao
Fu, Kyle Richardson, Peter Clark, and Ashish Sab[harwal. 2023. Decomposed prompting: A modular](https://openreview.net/forum?id=_nGgzQjzaRy)
[approach for solving complex tasks. In The Eleventh](https://openreview.net/forum?id=_nGgzQjzaRy)
_International Conference on Learning Representa-_
_tions._
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023. Making](http://arxiv.org/abs/2206.02336)
[large language models better reasoners with step-](http://arxiv.org/abs/2206.02336)
[aware verifier.](http://arxiv.org/abs/2206.02336)
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari,
Henryk Michalewski, Jacob Austin, David Bieber,
David Dohan, Aitor Lewkowycz, Maarten Bosma,
David Luan, et al. 2021. Show your work: Scratchpads for intermediate computation with language
models. arXiv preprint arXiv:2112.00114.
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec
[Radford, and Oleg Klimov. 2017. Proximal policy](http://arxiv.org/abs/1707.06347)
[optimization algorithms. CoRR, abs/1707.06347.](http://arxiv.org/abs/1707.06347)
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu,
Junxiao Song, Mingchuan Zhang, YK Li, Y Wu, and
Daya Guo. 2024. Deepseekmath: Pushing the limits
of mathematical reasoning in open language models.
_arXiv preprint arXiv:2402.03300._
Hao Sun, Hengyi Cai, Bo Wang, Yingyan Hou, Xiaochi Wei, Shuaiqiang Wang, Yan Zhang, and Dawei
Yin. 2023. Towards verifiable text generation with
evolving memory and self-reflection. arXiv preprint
_arXiv:2312.09075._
[Denis Tarasov and Kumar Shridhar. 2024. Distilling](http://arxiv.org/abs/2402.01812)
[llms’ decomposition abilities into compact language](http://arxiv.org/abs/2402.01812)
[models.](http://arxiv.org/abs/2402.01812)
Shubham Toshniwal, Ivan Moshkov, Sean Narenthiran, Daria Gitman, Fei Jia, and Igor Gitman. 2024.
Openmathinstruct-1: A 1.8 million math instruction
tuning dataset. arXiv preprint arXiv:2402.10176.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023. Llama: Open](http://arxiv.org/abs/2302.13971)
[and efficient foundation language models.](http://arxiv.org/abs/2302.13971)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain of thought prompt-](https://openreview.net/forum?id=_VjQlMeSB_J)
[ing elicits reasoning in large language models. In](https://openreview.net/forum?id=_VjQlMeSB_J)
_Advances in Neural Information Processing Systems._
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He,
Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao.
[2023. Large language models are better reasoners](http://arxiv.org/abs/2212.09561)
[with self-verification.](http://arxiv.org/abs/2212.09561)
Zhuofeng Wu, He Bai, Aonan Zhang, Jiatao Gu,
VG Vinod Vydiswaran, Navdeep Jaitly, and Yizhe
[Zhang. 2024. Divide-or-conquer? which part should](http://arxiv.org/abs/2402.15000)
[you distill your llm?](http://arxiv.org/abs/2402.15000)
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Tom Griffiths, Yuan Cao, and Karthik Narasimhan.
2024. Tree of thoughts: Deliberate problem solving
with large language models. Advances in Neural
_Information Processing Systems, 36._
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
[Li, and Yu Li. 2023. Progressive-hint prompting](http://arxiv.org/abs/2304.09797)
[improves reasoning in large language models.](http://arxiv.org/abs/2304.09797)
-----
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/forum?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/forum?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations._
**A** **Training Data Creation**
The data was generated using GPT-4. A temperature of 0.7 is used to ensure diversity in the generated data. We only stored the sub-question, subanswer dataset if the number of sub-questions generated was more than three, this was done to ensure
high data quality so that the model is able to decompose longer and more difficult questions effectively.
First, we generate all the concepts, then the subquestions given the question and the gold chain
of thought. Finally, we generate the sub-answer
given the question, a gold chain of thought and
the sub-question to be answered. for the verifier,
we first ask the LLM to answer the given question
using standard COT prompting. Then based on
the correctness of the answer, we take the solution
chain of thought produced by the LLM and the gold
answer and ask the LLM to classify the produced
solution based on the mistake made. If the answer
is correct, we store it separately and include it to
make up to 10% of the dataset with the label as ’No
Mistake’. Prompts for the data curation are given
below.
**A.1** **Verifier Data Creation**
**A.1.1** **Prompt**
You are a teacher, and you are grading
a student’s answer to a question. The
student’s answer is as follows: {COT_LLM}
The correct answer is as follows:
{COT_gold} Please provide feedback to
the students on the mistakes they have
made. You need to fill out a rubric and
classify the mistakes into the following
categories:
1. Conceptual Mistakes: The student has
misunderstood the concept or has applied
the wrong concept.
2. Computational Mistakes: The student
has made a mistake in the calculations.
3. Procedural Mistakes: The student has
made a mistake in the procedure of solving
the problem.
4. Mistake in understanding the question:
The student has made a mistake in
understanding the question.
5. Mistake in the first step: The student
has made a mistake in the first step of
the solution.
6. Mistake in the first half: The student
has made a mistake in the first half of
the solution.
7. Mistake in the second half: The student
has made a mistake in the second half of
the solution.
8. Mistake in the last step: The student
has made a mistake in the last step of
the solution.
9. No mistake: The student has not made
any mistake.
Please first provide feedback then
fill the rubric and then finally tell
your feedback to the student in between
<feedback> and </feedback> tags as shown
below:
For example, if you want to tell the
student that they have made a mistake in
the first step and a conceptual mistake,
then you need to write the following:
<feedback> 1,4 </feedback> Do not write
anything else in between <feedback> and
</feedback> tags except the numbers.
Now, please provide feedback to the
student on the mistakes they have made.
**A.2** **Decomposer Data Creation**
**A.2.1** **Concepts data creation**
I have a question’s solution, tell me
all the specific concepts, theorems and
formulas (separated by a comma,) used in
it. An example is given below.
Question: How many primes are in the row
of Pascal’s Triangle that starts with a 1
followed by a 6?
Answer: If the row contains a 1, then
a 6, then the binomial coefficients must
be 6 and 6 . All we need to check now
0 1
are 6 and 6, since 6 = 6, 6 = 6
2 3 0 6 1 5
, and 6 = 6 . 6 = 6!
6 = 6!2 4 2 4!× 2! [= 15] [, and]
3 3! 3! [= 20] [. None of those is prime,]
_×_
so there are 0 prime numbers in the given
row.
Concepts: Coefficients in Pascal’s
Triangle, Binomial Coefficients Formula,
Prime Numbers
Question: question
-----
Answer: answer
Concepts:
**A.2.2** **Sub-question data creation**
I have a question, it’s a solution and a
sub-question.
Your task is to break the question into
sub-questions based on the steps in the
answer.
Keep the following tips in mind:
1. Make sure not to break the
question into trivial sub-questions, the
sub-questions should be informative.
2. The sub-questions should not require
multiple steps to answer, something like
2-3 steps to solve is ideal.
3. One way to break the question could
be to identify what all quantities are
required in the question by observing it’s
answer and then try to frame sub-questions
based on the unknown entities.
4. Make sure to put each question in the
question tag like $ question(What is the
acceleration of the car as a function of
time?)$
One example is given below.
Question: How many primes are in the row
of Pascal’s Triangle that starts with a 1
followed by a 6?
Answer: If the row contains a 1, then
a 6, then the binomial coefficients must
be 6 and 6 . All we need to check now
0 1
are 6 and 6, since 6 = 6, 6 = 6
2 3 0 6 1 5
, and 6 = 6 . 6 = 6!
6 = 6!2 4 2 4!× 2! [= 15] [, and]
3 3! 3! [= 20] [. None of those is prime,]
_×_
so there are 0 prime numbers in the given
row.
Sub-questions:
$ question(How can the first two numbers
be represented in form of binomial
coefficients?)$, $ question(What are the
values of all the coefficients in the
row?)$, $ question(How many of the above
numbers are prime?)$
Question: question
Answer: answer
Sub-questions:
I want you to answer the subquestion along
with an explanation.
Make sure to put the sub-answer in
the answer tag like $sub-answer(The
acceleration of the car at time t =
2 seconds is speed / time = 2m/s/2s =
1m/s[2])$
Think step by step.
Question: question
Answer: answer
Sub-question: sub-question-array[i]
Sub-Answer:
**A.3** **Sub-answer data generation**
I have a question, it’s solution and a
sub-question.
-----
| [
"Subhabrata, Dutta",
"Gurusha, Juneja",
"Tanmoy, Chakraborty"
] | 2024-04-02T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2404.02255v1 | https://arxiv.org/abs/2404.02255 | null |
A Generation-based Deductive Method for Math Word Problems | Math word problems (MWP) involving advanced operators such as linear equation solver cannot be easily tackled by earlier MWP methods, because the existing generation methods suffer from repeated sub-expression generation and deductive methods are restricted to dealing with binary operations. This paper propose a new multivariate directed acyclic graph (mDAG) as an alternative to the generation methods’ binary expression tree or the deductive methods’ binary directed acyclic graph. Then to produce the topological ordering of mDAG, we propose a generation-based deductive (GeDe) model, which equips a generation model with a re-encoder to keep the deductive property but avoid the expensive enumeration of the deductive methods. GeDe performs well on math problems with many operators on the widely used benchmarks as well as solving multivariate operators on our own CMWPA benchmark. Our code is available at https://github.com/hyx1999/GeDe | null | # A Generation-based Deductive Method for Math Word Problems
**Yuxuan Hu[1,2], Jing Zhang[1,3][∗], Haoyang Li[1,3], Cuiping Li[1,3], Hong Chen[1,3]**
1School of Information, Renmin University of China, Beijing, China
2Key Laboratory of Data Engineering and Knowledge Engineering, MOE, China
3Engineering Research Center of Database and Business Intelligence, MOE, China
{huyuxuan1999,zhang-jing,lihaoyang.cs,licuiping,chong}@ruc.edu.cn
**Abstract**
Math word problems (MWP) involving advanced operators such as linear equation solver
cannot be easily tackled by earlier MWP methods, because the existing generation methods
suffer from repeated sub-expression generation
and deductive methods are restricted to dealing with binary operations. This paper propose a new multivariate directed acyclic graph
(mDAG) as an alternative to the generation
methods’ binary expression tree or the deductive methods’ binary directed acyclic graph.
Then to produce the topological ordering of
mDAG, we propose a generation-based deductive (GeDe) model, which equips a generation
model with a re-encoder to keep the deductive property but avoid the expensive enumeration of the deductive methods. GeDe performs well on math problems with many operators on the widely used benchmarks as well
as solving multivariate operators on our own
CMWPA benchmark. Our code is available at
[https://github.com/hyx1999/GeDe](https://github.com/hyx1999/GeDe)
**1** **Introduction**
Solving Math Word Problems (MWPs) is the
task of answering natural language problems that
require mathematical reasoning ability (Bobrow,
1964). To achieve such a skill, researchers have
proposed a variety of MWP solvers, each of which
seeks to produce a specific logic form that can be
used to calculate the answer to the problem.
Deductive methods and generation-based methods are typically the two main approaches used
to solve MWPs. Inspired by advances in machine translation, some generation-based methods
directly adopt a sequence-to-sequence (seq2seq)
model to generate the sequence of the math expression according to the problem (Wang et al.,
2017). To further capture the structure of the math
expression, some sequence-to-tree (seq2tree) methods (Xie and Sun, 2019) adopt a tree decoder to
_∗*Corresponding author._
generate the binary expression tree, where each
node denotes an operator or a quantity. These
generation-based methods, however, suffer from a
fatal flaw in that they require repeated generation
of the same sub-expression (or sub-tree), which
makes them inefficient. For example, in Figure 1
(a), the sub-expression (94 − 35 × 2) ÷ (4 − 2) is
generated four times. Humans, on the other hand,
can represent repeated sub-expressions with an intermediate quantity that can be naturally reused in
the following computation process.
Deductive approaches (Cao et al., 2021; Jie et al.,
2022) are suggested to address the aforementioned
reuse issue. Specifically, deductive methods convert the math expression into a binary Directed
Acyclic Graph (bDAG), where each node represents an operation that consists of a binary operator
and two input quantities. The calculation result of
an operation is represented by a new intermediate
quantity. Then, these methods need to generate a
topological ordering, i.e., an operation sequence,
of the bDAG. By doing this, subsequent operations
can easily reuse the previously generated intermediate quantities. As shown in Figure 1 (b), quantity q3
represents the sub-expression (94−2×35)÷(4−2),
which is then reused by two subsequent operations
denoted by quantity q4 and q8. When the operation
sequence is inferred, these operations are computed
consecutively to produce the final answer. Beyond
the ability to reuse the intermediate quantity, deductive methods are more interpretable because the
step-by-step generation of operations helps people
understand how the reasoning works. To generate
the operation at each reasoning step, existing deductive methods follow an “enumerate-then-classify”
procedure. To be more precise, they create a collection of candidate operations by listing every possible combination of the quantities and operators,
and then they use a classifier to choose the operation that has the highest probability, which can be
viewed as a greedy search strategy.
1737
-----
**Question:**
_There are several chickens and rabbits in a cage. Inside, we observe 94 feet and 35 heads. A chicken has 1 head and 2 feet. A rabbit has 1_
_head and 4 feet. The number of rabbits and chickens are denoted by x and y, respectively. Tell me the value of 𝒙× 𝒙+ 𝒚×𝒚._
**Answer:**
673
**Mathematical Expression:**
**((94-2×35)÷(4-2))×((94-2×35)÷(4-2)) +(35- 1× ((94-2×35)÷(4-2))) ÷1)×(35- 1× ((94-2×35)÷(4-2))) ÷1)**
**Binary Expression Tree**
+
× ×
÷ ÷ ÷ ÷
**-** **-** **-** **-** **-** 1 **-** 1
94 × 4 2 94 × 4 2
35 × 35 ×
2 35 2 35
1 ÷ 1 ÷
**-** **-** **-** **-**
94 × 4 2 94 × 4 2
2 35 2 35
Pre-order traversal of the above binary expression tree:
**Binary Directed Acyclic Graph**
𝑞( (Answer)=𝑞#+𝑞'
𝑞%=𝑞&÷1 𝑞'=𝑞%×𝑞% 𝑞#=𝑞$×𝑞$
𝑞&=35−𝑞* 𝑞*=1×𝑞$
𝑞!=2×35 𝑞"=94−𝑞! 𝑞)=4−2 𝑞$=𝑞"÷𝑞)
One topological ordering of the above bDAG:
Two intermediate An advanced
|Multivariate Directed Acyclic Graph (our method)|Col2|
|---|---|
|𝑞* (Answer)=𝑞)+𝑞$ 𝑞)=𝑞!×𝑞! 𝑞$=𝑞"×𝑞"||
|𝑞!,𝑞" = linear equation solver (|1 1 35, ) 4 2 94|
|||
quantities operator 6 operands
𝑞$ 𝑞* One topological ordering of the above mDAG:
|𝑞!|Col2|𝑞"|Col4|𝑞)|Col6|𝑞$|𝑞*|
|---|---|---|---|---|---|---|---|
|× ÷|-|94|Col4|
|---|---|---|---|
|𝑞(|Col2|𝑞#|Col4|Col5|𝑞'|Col7|𝑞%|𝑞&|
|---|---|---|---|---|---|---|---|---|
|𝑞!,𝑞"|Col2|𝑞)|Col4|Col5|𝑞$|Col7|𝑞*|
|---|---|---|---|---|---|---|---|
**+**
- 4 2
(a) The generation method (b) The deductive method
(c) The generation-based deductive method
Figure 1: Illustration of a MWP example with a natural language input problem and a corresponding mathematical
expression output that can be used to calculate the answer. The repeated sub-expression is underlined in red. In
order to get the answer, three methods are presented: (a) a seq2seq or seq2tree generation method to generate an
expression sequence or a binary expression tree; (b) a deductive method to reason out a topological ordering of the
bDAG; and (c) the proposed generation-based deductive method to generate a topological ordering of the mDAG.
One obvious limitation of the aforementioned approaches is that they only take into account the basic binary operators such as +, −, ×, ÷. Although
binary operators are the most fundamental in mathematics, there are some templated problems, such
as solving linear equations, finding the extreme values of quadratic equations, and even integrating a
function, that can be solved by existing advanced
operators. Thus, we can abstract an advanced op**erator to tackle each templated problem. With**
these advanced operators, we can inject prior mathematical knowledge to reduce the difficulty of solving MWPs. However, problems requiring advanced
operators are difficult to tackle using earlier MWP
methods: generation-based methods inherently suffer from the reuse issue; deductive methods are
limited by the assumption of binary operations.
To address this issue, we first define a multivariate Directed Acyclic Graph (mDAG) with each
node involving a multivariate operation that consists of a basic or advanced operator and multiple
input quantities. Compared to basic binary operators, advanced operators can receive multiple quantities and return multiple output quantities. For
example, in Figure 1 (c), a linear equation solver
requires 6 quantities (1, 1, 4, 2, 35, 94) and returns
2 intermediate quantities (q0, q1). Then, similar to
the bDAG, we use the topological ordering of the
mDAG to obtain a sequence of multivariate operations. To generate such a sequence, we propose
GeDe, a Generation-based Deductive approach.
Compared to generation-based techniques, GeDe
has the deductive property that enables the reuse
of intermediate quantities. Compared to deductive methods, GeDe employs a generation model to
generate each multivariate operation, which avoids
the need to enumerate a large number of possible
multivariate operations.
In order to achieve this generative-based deductive capacity, we equip a generation model with a
re-encoding strategy that jointly encodes the problem and intermediate quantities at each step of
reasoning, yielding embeddings of the intermediate quantities that could be reused in the subsequent steps. In addition, we switch from the traditional greedy or beam search to a hierarchical beam
search strategy, which is well suited to the equation
generation requirement.
**Contributions. (1) By extending bDAG to mDAG,**
we can directly address complex mathematical
problems using pre-defined advanced operators.
(2) We propose GeDe, a generation-based deductive model that keeps the deductive property while
avoiding the high cost of enumeration. GeDe
equips a generation model with the re-encoding
and hierarchical beam search strategies to achieve
the objective. (3) We automatically create a dataset
named CMWPA for solving complicated MWPs
that require both the basic binary operators and
the advanced operators. It has been shown that
GeDe not only effectively adapts advanced operators but also performs better on three existing MWP
1738
-----
datasets when more operations are involved.
**2** **Related Work**
**2.1** **Math Word Problem**
Early efforts to solve MWPs use rule-based approaches, which are only able to address a limited
number of MWP scenarios (Kushman et al., 2014;
Liguda and Pfeiffer, 2012; Roy and Roth, 2018).
Deep learning models, on the other hand, are better
capable of addressing a wider range of MWPs. The
first seq2seq model for MWPs is proposed by Wang
et al. (2017). This model employs RNN to encode
the problem and produce mathematical expressions.
To enhance the seq2seq model, additional techniques have been developed, including reinforcement learning (Huang et al., 2018), template-based
methods (Wang et al., 2019), and group attention
mechanisms (Li et al., 2019). Seq2tree, a tree
structure decoder, is developed by Xie and Sun
(2019). It replaces the original sequence decoder
and greatly outperforms seq2seq models in terms of
performance. KA-S2T (Wu et al., 2020) and MWPBERT (Liang et al., 2022) inject commonsense
knowledge and quantities’ properties to improve
model performance. In order to encode the relationships between quantities in MWPs, Graph2tree (Li
et al., 2020; Zhang et al., 2020) encodes the input
problem using graph neural networks.
In addition to the generation models with
seq2seq, seq2tree, or graph2tree structures, other
efforts use deductive methods to solve MWPs step
by step rather than directly generating the entire expression. Cao et al. (2021) represent the calculation
process by bDAG and extract the bDAG structure
by aggregating quantities and sub-expressions iteratively. Jie et al. (2022) view the task as a complex
relation extraction problem and predict the relation of two quantities gradually. Compared with
generation methods, deductive methods can easily
employ the intermediate values to avoid repetitive
generation. We expand the deductive methods to
handle more complex advanced operators.
**2.2** **Large-scale Pre-trained Language Model**
In-context few-shot learning or even zero-shot
learning based on large-scale pre-trained language
models, such as GPT-3 (Brown et al., 2020),
PaLM (Chowdhery et al., 2022), and OPT (Zhang
et al., 2022), has been thoroughly studied for multiple tasks, including math word problem solving (Cobbe et al., 2021; Wang et al., 2022; Wei
et al., 2022). This tuning-free methods have
achieved promising performance, and their success
mainly relies on the reasoning power of large-scale
PLMs. However, the reasoning power is extremely
expensive due to the large number of parameters,
massive pre-training data, carefully designed pretraining objectives, and huge overhead of computational resources. In contrast, we investigate finetuning the small models.
**3** **Problem Definition**
The goal of MWP is to generate a specific logic
form that can be executed to answer the problem
_P =_ _p1, p2, .., pn_ which consists of n tokens
_{_ _}_
and m quantity tokens Q = _q1, q2, ..., qm_ . Some
_{_ _}_
commonsense constants, such as π and e, may not
explicitly appear in the problem; thus, we additionally add them to the quantity set Q.
In this paper, we define the multivariate[1] directed
acyclic graph (mDAG) as our target logic form,
which describes the process of solving MWPs. The
nodes of mDAG denote operations that consist
of an operator and multiple quantities, and the
edges represent the dependency between nodes.
Our goal is to generate a operation sequence O =
(o[1], o[2], ..., o[|][O][|]) which can be obtained from the
topological ordering of mDAG. |O| is the number
of operations. The t-th operation is a sequence of
tokens o[t] = (a[t]1[, a][t]2[, ..., a][t]|o[t]|[)][ with each token rep-]
resenting an operator or a quantity. Each operator
is selected from the operator set V, which is predefined by the provided dataset. Each quantity is
choosen from Q, which is initialized with the m
quantity tokens in P and can gradually grow as the
steps of reasoning progress. |o[t]| is the number of
tokens of the t-th operation.
**4** **Approach**
**4.1** **Overview**
In general, the proposed GeDe method consists
of two main components: the re-encoder and decoder. The former aims to jointly encode the problem and quantities, which can support the reuse
of intermediate quantities. The latter is designed
to generate an operation according to the output
of the re-encoder. Since our target is an operation
sequence, we need to perform multiple reasoning
steps, with each step generating an operation. We
illustrate the reasoning process in Figure 2. At each
1The term "multivariate" means that the operator can receive multiple quantities and output multiple quantities.
1739
-----
**Problem 𝑷𝒓** **Reasoning step 1** **Reasoning step 2**
initialize inputs update inputs update inputs
_There are several chickens_ 𝑃[(] = 𝑃 𝑃['] = Concat(𝑃[(],𝑜')
_and rabbits in a cage. Inside,_
_we can observe 𝑞! feet and 𝑞"_
_heads. A chicken has 𝑞# head_ ℳ)(𝑃[!]) ℳ)(𝑃["])
_and 𝑞$ feet. A rabbit has 𝑞%_
_head and 𝑞& feet …_ 𝑹[!] 𝑸[!]: [𝒒!, …, 𝒒&] 𝑹["] 𝑸["]: [𝒒!, …, 𝒒;]
Note: The specific quantities
in the original problem 𝑃 are
replaced by 𝑞!~ 𝑞&. ℳ:(𝑹[!], 𝑸[!]) ℳ:(𝑹["], 𝑸["])
𝑜": 𝑞!,𝑞"=linear equation solver( [𝑞]𝑞[#]% 𝑞𝑞$& _[,][ 𝑞]𝑞['](_ [)] 𝑜#: 𝑞)=𝑞!×𝑞!
Figure 2: Illustration of iteratively generating the operation sequence by the proposed GeDe. At each reasoning
step, GeDe re-encodes the input by adding new intermediate quantities and then generates a new operation.
reasoning step, we update the input sequence by
adding new intermediate quantities generated in the
previous step. The updated input sequence is fed
into the re-encoder and the decoder to generate an
operation. The generation process is equipped with
a hierarchical beam search strategy to enable both
token-level beam search within an operation and
operation-level beam search in the whole operation
sequence.
**4.2** **Re-Encoder**
This section delves into the re-encoder by explaining the input and the encoder respectively.
Since we are only interested in the semantics of
the quantities rather than their precise values, we
first substitute each quantity in the original problem P with a general special token, [QTTi]. This
leaves Pr devoid of any specific quantities. In order
to obtain the encoder’s input sequence, Pin[t] [, we con-]
catenate Pr with all intermediate quantities, where
each quantity signifies its corresponding operation.
We take the example in Figure 2 to explain the
input. The given math problem contains six quantities, which are replaced by [QTT0] to [QTT5].
At reasoning step t, we have already generated the
following operation:
quantities [QTT6] and [QTT7] and concatenate
the sequence
[QTT6][QTT7][=][LES][QTT2][QTT4] · · · [QTT1] (2)
with the original input Pr to obtain Pin[t] [.]
We instantiate the re-encoder ME by a PLM
(e.g., BERT or GPT) to represent the input sequence and obtain the reasoning state, i.e.,
**R[t]** = ME(Pin[t] [)][,] (3)
where R[t] _∈_ _R[N]_ _[×][H]_ represents the reasoning state
at step t. N denotes the length of the input sequence and H denotes the hidden size.
For the subsequent generation module, we extract the representation of each quantity from R[t]
according to their positions in Pin[t] [:]
**Q[t]** = **R[t][i]** _i_ _Iq_ _,_ (4)
_{_ _|_ _∈_ _}_
where Q[t] _∈_ _R[M]_ _[×][H]_, M denotes the number of
quantities, Iq saves the indexes of all the quantities
in Pin[t] [, and][ R][t][[][i][]][ denotes the][ i][-th row of][ R][t][.]
In summary, the original input is re-encoded with
the previously generated intermediate quantities at
each reasoning step to update the reasoning state
and record all intermediate quantities, which may
be reused in the subsequent generation process.
**4.3** **Decoder**
We adopt a Gated Recurrent Unit (GRU) network
(Chung et al., 2014) combined with the attention
mechanism (Vaswani et al., 2017) as the decoder
_D. Following the majority of the earlier works_
_M_
(Liang et al., 2022; Tan et al., 2021; Xie and Sun,
2019), we choose GRU instead of transformer for a
fair comparison. Although some works choose pretrained transformer (Shen et al., 2021), their performance might not be improved due to the larger
parameters but limited labeled data.
[LES] [QTT2] [QTT4] [QTT0] (1)
[QTT3] [QTT5] [QTT1]
= [LES][QTT2][QTT4][QTT3][QTT5][QTT0][QTT1]
where LES stands for a multivariant operator of linear equation solver given the operands of a matrix
made up of [QTT2], [QTT3], [QTT4], [QTT5]
and a vector made up of [QTT0] and [QTT1]. In
practice, the operation is represented by a sequence
that expands the matrix and vector by row. Then
we denote the outputs of this operation by two new
1740
-----
**Operation Generation. The decoder aims to pro-**
vide an operation o[t] = (a[t]1[, a][t]2[, ..., a][t]|o[t]|[)][ at each]
reasoning step t. To enable the auto-regressive
generation, we insert a special beginning token
([BOS]) before the first token a[t]1 [and add a special]
ending token ([EOS] or [EOO]) after the last token a[t]o[t] [to re-create][ o][t][ = (][a]0[t] _[, a][t]1[, a][t]2[, ..., a][t]o[t]_ +1[)][.]
_|_ _|_ _|_ _|_
While [EOS] only signifies the termination of the
current operation, [EOO] stands for the final token of the complete operation sequence. The hidden state h[t]i [of each token][ a]i[t] [can be obtained by]
**h[t]i** [= GRU(][h]i[t] 1[,][ a]i[t][)][ where][ h][t]i 1
resents the hidden state of the previous step,− _−_ _[∈]_ _[R][1][×][H][ rep-] h[t]0_
is initialized from the hidden state of the [CLS]
token produced by the encoder, and a[t]i
is the representation of the token a[t]i[. Next, us-][∈] _[R][1][×][H]_
ing h[t]i [as the query to attend to current reasoning]
state R[t], we obtain the attention-enhanced state
**A[t]i** [= MHA(][h]i[t][,][ R][t][)][, where][ MHA][ denotes multi-]
head attention (Vaswani et al., 2017). Finally, we
determine the likelihood of the output token by
determining how well A[t]i [resembles the represen-]
tation of quantities and operators, i.e.,
_p(a[t]i[|][o][<t][, a]<i[t]_ _[, P]_ [) = softmax(][A]i[t][([][V][ |][ Q][t][])][T][ )][,][ (5)]
where o[<t] represents o[1], o[2], ..., o[t][−][1] before reasoning step t, a[t]<i [represents][ a]0[t] _[, a][t]1[,][ · · ·][, a][t]i_ 1 [before]
_−_
the i-th token of step t, | is the matrix concatenation operator, V ∈ _R[|][V][ |×][H]_ and Q[t] _∈_ _R[M]_ _[×][H]_ denote the representations of operators and t-th step’s
quantities respectively. When obtaining a new operation o[t], we can determine the number of new
quantities by the operator in o[t] and record these
new intermediate quantities for the subsequent reasoning steps. When [EOS] has the highest probability, the decoding process of the current operation
ends but a new operation generation starts instead.
When [EOO] has the highest probability, the entire
decoding process is complete.
**Training Objective. Given a problem P and its**
ground truth operation sequence O, we maximize
the probability of generating O by P, i.e.,
a refined version of greedy search (Tillmann and
Ney, 2003). However, using beam search in the
deductive methods is difficult because the search
space of the operation sequence is nested. In other
words, we need to generate each operation based on
tokens and generate the entire operation sequence
based on operations. Therefore, previous deductive methods (Cao et al., 2021; Jie et al., 2022)
only adopt the greedy search and leave the implementation of the beam search as further work. To
address this challenge, we propose a hierarchical
_beam search strategy. Compared with the tradi-_
tional beam search, the hierarchical beam search
can control the generation process at two levels.
Specifically, the hierarchical beam search consists an inner beam search and an outer beam
search. The former is a standard beam search which
seeks a series of tokens to form a candidate operation. The latter is designed to search a complete
operation sequence. The beam score of the inner
beam search purely relies on the probabilities of
tokens predicted by the decoder. Suppose the t-th
step generates l tokens, the inner beam score ibs[t]
is calculated as:
1
_l = [1]_
_ibs[t]_ = log
_p(a[t]i[)]_
_i=1_
Y
log p(a[t]i[)][,] (7)
_i=1_
X
where p(a[t]i[)][ is computed by Eq][ (][5][)][. We use the]
inner beam scores of generated operations to approximate the distribution of operations to support
the outer beam search. The probability of the t-th
operation o[t] can be calculated as the softmax score
of its inner beam score, i.e.,
_p(o[t]) = softmax(exp(ibs[t]))._ (8)
Suppose the entire operation sequence contains
_T operations, the outer beam score is computed as:_
1
_T = [1]_
_obs = log(_ _p(o[t]))_
_t=1_
Y
log p(o[t]). (9)
_t=1_
X
Algorithm 1 presents the hierarchical beam
search algorithm. Each outer beam is denoted by
the symbol beam, which keeps track of both the
current operation sequence and the beam score.
The empty operation sequence and score of zero
are used to construct the initial outer beam initially (line 1). Then, we iteratively expand outer
beams until they are all finished, i.e., all the outer
beams are terminated with [EOO] (line 4-14). For
_|o[t]|+1_
_p(a[t]i[|][o][<t][, a][t]<i[, P]_ [)][.] (6)
_i=1_
Y
_|O|_
_t=1_
Y
_p(O|P_ ) =
**4.4** **Hierarchical Beam Search**
To enhance the generation quality during inference,
beam search is used in many generation tasks as
1741
-----
**Algorithm 1 Hierarchical Beam Search**
**Input: Math World Problem P**, Beam size K
**Output: beams with Top-K operation sequences**
1: beams ← [InitialBeam];
2: while not all beams are over do
4:3: _beamsfor beamn ← in beams[];_ **do**
5: **if beam is over then**
6: _beamsn.append(beam);_
7: **else**
8: _ops ←_ InnerBeamSearch(P, beam, K);
9: **for op in ops do**
11:10: _beamsbeamnewn.append ←_ Extend((beambeam, opnew); );
12: **end for**
13: **end if**
14: **end for**
15: _beams ←_ GetTopK(beamsn, K);
16: end while
widely-adopted MWP datasets to show the effectiveness of our model on binary operations.
**5.1** **Experimental Setup**
**Datasets.** We consider four MWP datasets including our created CMWPA and three widelyused existing MWP datasets: MAWPS (KoncelKedziorski et al., 2016), Math23k (Wang et al.,
2017), MathQA (Amini et al., 2019), and SVAMP
(Patel et al., 2021). We use CMWPA to verify the
validity of multivariate operations. Following (Tan
et al., 2021), we perform pre-processing to filter out
unsolvable problems. In all the datasets, we take
into account the basic binary operators addition
(+), subtraction (−), multiplication (×), division
(÷), and exponentiation (ˆ). For advanced operators used in the CMWPA dataset, we consider the
linear equation solver, the quadratic function extremum solver, and the quadratic function integral
solver. Appendix A.2 presents the statistics for
each dataset.
**Evaluation Metric. Following previous work (Jie**
et al., 2022), we compare the predicted and the gold
answer to calculate the accuracy as the evaluation
metric. We parse out the operator and operands
from the model predicted expression sequence and
then use the corresponding operator executor to
calculate the answers. We explain the details of the
parsing and execution in Appendix A.3.
**Implementation Details. We adopt RoBERTa-**
base[2] (Liu et al., 2019) as our re-encoder for English datasets, and Chinese-RoBERTa-base[3] (Cui
et al., 2020) for Chinese datasets. The purpose of
using the Roberta model is to make a more fair
comparison with previous work. We can also use
unidirectional attention models (e.g., GPT). We
use AdmaW to optimize the loss function with a
learning rate of 2e-5, a weight decay of 1e-2, and
a batch size of 8. During inference, the beam size
_K is set to 4 by default. For CMWPA, Math23K,_
MathQA, and SVAMP we report accuracy on their
test set. For MAWPS and Math23k, we follow previous works and also report 5-fold cross-validation
performance. We conduct all experiments with a
RTX 3090 (24G) GPU.
2https://huggingface.co/roberta-base
3https://huggingface.co/hfl/chinese-roberta-wwm-ext
each extensible outer beam, we search candidate
operations ops using the inner beam search (line
8). The inner and the outer beam search share the
same beam size K. Next, we extend outer beams
with these candidate operations (line 9-12). At
the end of each step, we only maintain the top-K
outer beams according to their scores computed by
Eq. (9) (line 15). Finally, beams save the top-K
operation sequences. We discuss the complexity of
GeDe in Appendix A.1
**4.5** **Decoding Constraint**
Logic forms need to obey clear grammatical rules.
In order to guarantee the validity of the output,
we provide two constraint strategies, one during
and one after the decoding process. Inspired by
PICARD (Scholak et al., 2021), an incremental
grammar checker proposed for Text-to-SQL task,
the constraint strategy during the decoding process
is to filter out illegal beams at each decoding step
in the inner beam search to prevent potential syntax errors in the generated operation. For example,
when we detect that the current token generation
step needs to generate an operator, we will reject
all non-operators. Following (Jie et al., 2022), the
after decoding constraint strategy eliminates candidate operations that are improbable to exist in realworld mathematical problems, such as “[QTTi]
_−_
[QTTi]” and “[QTTi][[QTT][i][]]”.
**5** **Experiments**
In this section, we establish a dataset for multivariant advanced operators and show that the proposed
GeDe is capable of doing these types of operations
successfully. We also conduct experiments on four
1742
-----
Model MAWPS 5-fold Math23k Test Set Math23k 5-fold MathQA Test Set SVAMP Test Set
GroupAttn (Li et al., 2019) 76.1 69.5 66.9 - 21.5
mBERT+LSTM (Tan et al., 2021) - 75.1 - 77.1 -
RoBERTaGen (Lan et al., 2022) 88.4 - 76.9 76.6 30.3
Generate&Rank (Shen et al., 2021) 84.0 **85.4** **84.3** - -
GTS (Xie and Sun, 2019) 82.6 75.6 74.3 - 41.0
Graph2Tree (Zhang et al., 2020) 85.6 77.4 75.5 69.5 43.8
HMS (Lin et al., 2021) 80.3 76.1 - - -
MultiE&D (Shen and Jin, 2020) - 78.4 76.9 - -
BERT-CL (Li et al., 2022) - 82.4 - 73.8 -
MWP-RoBERTa (Liang et al., 2022) - 84.5 82.0 76.6 -
S2S
S2T/G2T
RoBERTa-DR (Jie et al., 2022) 92.0 85.1 83.0 78.6 **47.3**
DR
GeDe **92.3** **85.4** 84.2 **81.5** 45.7
Table 1: Accuracy on three existing MWP datasets (%).
**5.2** **Experiment on CMWPA**
The existing MWP datasets only use basic binary
operators as target logic form. Rewriting these
logic forms to support advanced operators is expensive. Therefore, based on handcraft templates,
we create a synthetic dataset named CMWPA
(Complex Math Word Problems with Advanced
operators).
To create the CMWPA dataset, we first define
needed operators which include five binary operators (addition (+), subtraction (−), multiplication (×), division (÷), and exponentiation (ˆ)), as
well as three advanced operators, which can be
used to solve linear equations (the [linear equation solver] operator), find the maximum value
of quadratic functions (the [quadratic function extremum solver] operator), and find the definite integrals of quadratic functions (the [quadratic function
integral solver] operator). For each operator, we
write one or more templates to generate a text description and its operation. We only consider the
quadratic function because the operations related
to the quadratic function can be transformed to a
series of binary operations for training the baseline
model. The templates of CMWPA is described in
Appendix A.4. In this dataset, for each problem,
we provide two types of logic forms: multivariate
operation sequence and binary operation sequence.
An example is given in Appendix Table 5.
We conduct experiments on CMWPA to demonstrate that using advanced operators to solve complex MWPs is more effective than only using basic
binary operators. Concretely, our proposed GeDe
is applied to generate multivariate operation sequences. Then for fair comparison, we adopt GeDe
to generate binary operation sequence.
**Experiment Results. Table 2 shows the accuracy**
and inference time on CMWPA, using mDAG as
Logic Form Accuracy Inference Time
BET 32.0 600 ms/per sample
mDAG 95.0 400 ms/per sample
Table 2: Accuracy (%) and time cost on CMWPA
of GeDe with different annotation (BET: Binary Expression Tree, mDAG: Multivariate Directed Acyclic
Graph).
annotation, GeDe achieves 95.0% accuracy, which
indicates that our proposed method can effectively
support advanced operators to generate the multivariate operation sequence. However, when using
the binary expression tree as the generation target,
GeDe only achieves 32.0% accuracy. Because the
average number of advanced operators in multivariate operation sequences is 2.98, which is significantly less than the average number of binary
operators (i.e., 35.03) in binary expressions tree,
using advanced operators to solve MWPs can essentially reduce the learning difficulty and lead to
improved both the accuracy and efficiency.
**5.3** **Experiment on Existing MWP Datasets**
**Baselines. The baselines can be broadly catego-**
rized into four groups, sequence-to-sequence(S2S),
sequence-to-tree(S2T), graph-to-tree(G2T), and
deductive-reasoning(DR), where the first three of
these are all generation-based methods but are instantiated with different encoders or decoders. We
select baselines having reported the performances
on at least one of the three datasets.
**Experiment Results. We start by running tests**
on MAWPS and Math23k. As shown in Table 1,
our model achieves promising performance on both
the datasets compared to previous state-of-the-art
(SOTA) methods. Given that MAWPS only has
an average of 1.41 binary operations, the proposed
1743
-----
GeDe only slightly improves 0.3% accuracy on
MAWPS compared to earlier baselines. This is
not enough to demonstrate the benefits of the proposed model. On Math23k, GeDe performs equally
well as the earlier SOTA method Generate&Rank.
However, Generate&Rank fine-tunes a mBARTlarge (Liu et al., 2020) model with 610M parameters. In contrast, GeDe only involves 126M parameters and thus reveals a better parameter-efficiency.
We further evaluate our method on MathQA, the
most challenging MWP dataset with an average of
4.25 binary operations, and show results in Table 1.
Our model greatly beats all baselines (+2.9%),
which demonstrates the model’s efficacy in handing complex MWPs. In summary, on three existing
MWP datasets, the performances of GeDe are on
par or better than those of the closest competitors.
SVAMP is also a challenging dataset that is manually created to evaluate a model’s robustness. On
this dataset, GeDe achieves an accuracy of 45.7%,
which can outperform the vast majority of baselines
except the DR model.
In addition, we conduct experiments based on
Roberta-large on the Math23k dataset. The model
achieves an accuracy of 86.7% on the Math23K test
set. Using Roberta-large improves the accuracy by
1.3% over using Roberta-base. This shows that
using a larger PLM improves the performance of
our method and outperforms the baseline Generate
& Rank model on the Math23K test set.
To further highlight the advantages of the proposed GeDe, following (Jie et al., 2022), we provide a fine-grained analysis on MathQA based on
various numbers of operations. To be more specific,
we compare our model with the most powerful
baseline RoBERTa-DR (Jie et al., 2022) and display the analysis results in Table 3. We observe that
GeDe performs better on samples with 1, 3, and 4
operations, particularly on samples with at least 5
operations. This comparison indicates our model is
more robust to problems requiring more reasoning
steps, because the designed re-encoder can capture
adequate interactions between the newly produced
quantities and the original problem.
**5.4** **Ablation Study**
In this section, we take a thorough ablation study
on MathQA dataset to verify the effectiveness of
the re-encode and the hierarchical beam search
strategies in the proposed GeDe.
**Effect of Re-encoder. The proposed re-encoder**
# Operations RoBERTa-DR GeDe
1 77.4 **78.0**
2 **83.5** 81.8
3 83.4 **85.1**
4 81.7 **84.0**
_≥5_ 71.4 **77.5**
Overall 78.6 **81.5**
Table 3: Fine-grained accuracy on MathQA (%).
Model variant Accuracy
GeDe **81.5**
- w/o dynamic quantity embeddings 80.3
- w/o re-encoder 75.8
- w/o hierarchical beam search 81.0
Table 4: Ablation study on MathQA (%).
in Section 4.2 can update both new quantities and
reasoning state at each reasoning step. We investigate the two functions respectively.
Instead of using dynamic quantity embeddings,
we develop a variant model with static quantity embeddings. In other words, instead of having distinct
embeddings updated based on various contexts in
various math problems, [QTTi] in various math
problems is assigned a unified embedding that is
updated globally. Note we keep re-encoding the
original problem with the newly produced quantities at each step t, but only the updated reasoning state R[t] is leveraged. The comparison results
in Table 4 show that without the dynamic quantity embeddings, the performance drops 1.2% on
MathQA’s test set. Since different MWPs’ quantities reflect different semantics, it is preferable for
them to be dynamically updated with their contexts.
Then we completely remove the re-encoder and
only allow the encoder to encode the original problem. Instead, we directly use the hidden state in the
decoder’s GRU network to represent the reasoning
state. Table 4 shows that without the re-encoder,
the performance drops 5.7%. In this variant model,
although the quantities are dynamically updated
according to various problems, the interactions between the quantities and the input problem are not
fully exploited as the re-encoder does.
**Effect of Hierarchical Beam Search. Previous de-**
ductive methods (Cao et al., 2021; Jie et al., 2022)
generate the operation sequence based on hierarchical greedy search, and regard the implementation
of beam search as a future challenge. We imple
1744
-----
ment hierarchical beam search in our GeDe to improve greedy search. We compare them, where
the beam size is set to 1 to create a greedy search.
As shown in Table 4, when the hierarchical beam
search is disabled (beam size = 4) and replaced
with the hierarchical greedy search (beam size =
1), the performance drops 0.5%. By observing
the inner and outer beam scores in the generation
process, for most of the samples, we find that the
score of the first beam is significantly greater than
that of the remaining beams, resulting in a relatively small gap between greedy and beam search.
This problem, also referred to as neural networks’
“over-confidence”, has been studied by some works
(Miao et al., 2021; Wang et al., 2021). Such improvement is left in the further.
**6** **Conclusion and Future Work**
This paper proposes a multivariant direct acyclic
graph (mDAG) to describe a math expression in
order to handle the advanced multivariant operators. Then to generate the topological ordering of
mDAG, we propose a generation model equipped
with a re-encode strategy to keep the deductive
property but avoid the expensive enumeration of
existing deductive methods. A hierarchical beam
search algorithm is implemented to enable the inner token and outer operation searches. Extensive
experiments on three standard datasets and one
automatically created dataset demonstrate the proposed model’s advantage in solving math problems
with binary operators and advanced operators.
**7** **Limitations**
From the time complexity analysis in Appendix A.1, we can see that our model will face
the efficiency issue when it needs to generate a
long operation sequence. At the same time, the reencode module needs to concatenate the problem
description with generated operations, which may
reach the input length limit of PLM. Therefore, our
future work will study how to compress the input
sequence during the generation process to address
above issues.
**8** **Ethics Statement**
For many years, public opinion has debated the
pros and cons associated with artificial intelligence
technology. One consensus is that advances in
technology may be used in a variety of scenarios,
leading to different influences. To provide an ethical analysis of this work and others on the same
line, we will address three aspects: the possible
positive or negative effects of our work, the impact
of harmful information in datasets, and the equality
and differences between different languages.
First, the point of studying MWP is to explore
the mathematical reasoning capabilities of artificial intelligence (Wei et al., 2022). However, the
developed models may still be applied to harmful
aspects, such as cheating in math exams.
On the other hand, the presence of harmful information in the training data may lead the model
to learn some implicit biases (Liang et al., 2021;
Steed et al., 2022). In our experiments, for the
three existing datasets, we exactly follow the experimental setup of previous works to pre-process
and remove the potential harmful information. For
our manually created dataset CMWPA, our templates also do not contain any harmful information.
However, in the inference phase, our model cannot
reject answers when the user provides malicious input. Therefore, we need to employ extra efforts to
avoid this issue when the model is deployed online.
Finally, we use both English and Chinese
datasets in our experiments to respect linguistic
equality and better take into account language differences. The experimental results validate the
robustness of our model across languages. Nevertheless, English and Chinese are the two most
popular languages, and we should make greater
efforts to concentrate on and preserve the development of minor languages in the field of natural
language processing (Zhang et al., 2021).
**Acknowledgments**
This work is supported by National Natural Science Foundation of China (62322214, 62072460,
62172424,62276270); Beijing Natural Science
Foundation (4212022); the Public Computing
Cloud at Renmin University of China.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Ha[jishirzi. 2019. MathQA: Towards interpretable math](https://doi.org/10.18653/v1/N19-1245)
[word problem solving with operation-based for-](https://doi.org/10.18653/v1/N19-1245)
[malisms. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1245)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
1745
-----
2357–2367, Minneapolis, Minnesota. Association for
Computational Linguistics.
Daniel G. Bobrow. 1964. Natural language input for a
computer problem solving system. Technical report,
USA.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020.
[Language models are few-shot learners.](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf) In Ad_vances in Neural Information Processing Systems,_
volume 33, pages 1877–1901. Curran Associates,
Inc.
Yixuan Cao, Feng Hong, Hongwei Li, and Ping Luo.
2021. A bottom-up dag structure extraction model
for math word problems. In AAAI Conference on
_Artificial Intelligence._
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts,
Paul Barham, Hyung Won Chung, Charles Sutton,
Sebastian Gehrmann, et al. 2022. Palm: Scaling
language modeling with pathways. arXiv preprint
_arXiv:2204.02311._
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho,
and Yoshua Bengio. 2014. Empirical evaluation of
gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](https://doi.org/10.48550/ARXIV.2110.14168)
[lems.](https://doi.org/10.48550/ARXIV.2110.14168)
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin
[Wang, and Guoping Hu. 2020. Revisiting pre-trained](https://www.aclweb.org/anthology/2020.findings-emnlp.58)
[models for Chinese natural language processing. In](https://www.aclweb.org/anthology/2020.findings-emnlp.58)
_Proceedings of the 2020 Conference on Empirical_
_Methods in Natural Language Processing: Findings,_
pages 657–668, Online. Association for Computational Linguistics.
Danqing Huang, Jing Liu, Chin-Yew Lin, and Jian Yin.
[2018. Neural math word problem solver with re-](https://aclanthology.org/C18-1018)
[inforcement learning. In Proceedings of the 27th](https://aclanthology.org/C18-1018)
_International Conference on Computational Linguis-_
_tics, pages 213–223, Santa Fe, New Mexico, USA._
Association for Computational Linguistics.
[Zhanming Jie, Jierui Li, and Wei Lu. 2022. Learning](https://doi.org/10.18653/v1/2022.acl-long.410)
[to reason deductively: Math word problem solving](https://doi.org/10.18653/v1/2022.acl-long.410)
[as complex relation extraction. In Proceedings of the](https://doi.org/10.18653/v1/2022.acl-long.410)
_60th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
5944–5955, Dublin, Ireland. Association for Computational Linguistics.
Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate
[Kushman, and Hannaneh Hajishirzi. 2016. MAWPS:](https://doi.org/10.18653/v1/N16-1136)
[A math word problem repository. In Proceedings of](https://doi.org/10.18653/v1/N16-1136)
_the 2016 Conference of the North American Chapter_
_of the Association for Computational Linguistics: Hu-_
_man Language Technologies, pages 1152–1157, San_
Diego, California. Association for Computational
Linguistics.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
[Regina Barzilay. 2014. Learning to automatically](https://doi.org/10.3115/v1/P14-1026)
[solve algebra word problems. In Proceedings of the](https://doi.org/10.3115/v1/P14-1026)
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281, Baltimore, Maryland. Association for Computational Linguistics.
Yihuai Lan, Lei Wang, Qiyuan Zhang, Yunshi Lan,
Bing Tian Dai, Yan Wang, Dongxiang Zhang, and
[Ee-Peng Lim. 2022. Mwptoolkit: An open-source](https://doi.org/10.1609/aaai.v36i11.21723)
[framework for deep learning-based math word prob-](https://doi.org/10.1609/aaai.v36i11.21723)
[lem solvers. Proceedings of the AAAI Conference on](https://doi.org/10.1609/aaai.v36i11.21723)
_Artificial Intelligence, 36(11):13188–13190._
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian
[Dai, and Dongxiang Zhang. 2019. Modeling intra-](https://doi.org/10.18653/v1/P19-1619)
[relation in math word problems with different func-](https://doi.org/10.18653/v1/P19-1619)
[tional multi-head attentions. In Proceedings of the](https://doi.org/10.18653/v1/P19-1619)
_57th Annual Meeting of the Association for Computa-_
_tional Linguistics, pages 6162–6167, Florence, Italy._
Association for Computational Linguistics.
Shucheng Li, Lingfei Wu, Shiwei Feng, Fangli Xu,
Fengyuan Xu, and Sheng Zhong. 2020. Graph-totree neural networks for learning structured inputoutput translation with applications to semantic parsing and math word problem. In Findings of the Asso_ciation for Computational Linguistics: EMNLP 2020,_
pages 2841–2852.
Zhongli Li, Wenxuan Zhang, Chao Yan, Qingyu Zhou,
[Chao Li, Hongzhi Liu, and Yunbo Cao. 2022. Seek-](https://doi.org/10.18653/v1/2022.findings-acl.195)
[ing patterns, not just memorizing procedures: Con-](https://doi.org/10.18653/v1/2022.findings-acl.195)
[trastive learning for solving math word problems.](https://doi.org/10.18653/v1/2022.findings-acl.195)
In Findings of the Association for Computational
_Linguistics: ACL 2022, pages 2486–2496, Dublin,_
Ireland. Association for Computational Linguistics.
Paul Pu Liang, Chiyu Wu, Louis-Philippe Morency, and
[Ruslan Salakhutdinov. 2021. Towards understanding](http://proceedings.mlr.press/v139/liang21a.html)
[and mitigating social biases in language models. In](http://proceedings.mlr.press/v139/liang21a.html)
_Proceedings of the 38th International Conference_
_on Machine Learning, ICML 2021, 18-24 July 2021,_
_Virtual Event, volume 139 of Proceedings of Machine_
_Learning Research, pages 6565–6576. PMLR._
Zhenwen Liang, Jipeng Zhang, Lei Wang, Wei Qin,
Yunshi Lan, Jie Shao, and Xiangliang Zhang. 2022.
[MWP-BERT: Numeracy-augmented pre-training for](https://doi.org/10.18653/v1/2022.findings-naacl.74)
[math word problem solving. In Findings of the Asso-](https://doi.org/10.18653/v1/2022.findings-naacl.74)
_ciation for Computational Linguistics: NAACL 2022,_
pages 997–1009, Seattle, United States. Association
for Computational Linguistics.
1746
-----
Christian Liguda and Thies Pfeiffer. 2012. Modeling
math word problems with augmented semantic networks. In Natural Language Processing and Infor_mation Systems, pages 247–252, Berlin, Heidelberg._
Springer Berlin Heidelberg.
Xin Lin, Zhenya Huang, Hongke Zhao, Enhong Chen,
[Qi Liu, Hao Wang, and Shijin Wang. 2021. Hms:](https://doi.org/10.1609/aaai.v35i5.16547)
[A hierarchical solver with dependency-enhanced un-](https://doi.org/10.1609/aaai.v35i5.16547)
[derstanding for math word problem. Proceedings](https://doi.org/10.1609/aaai.v35i5.16547)
_of the AAAI Conference on Artificial Intelligence,_
35(5):4232–4240.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey
Edunov, Marjan Ghazvininejad, Mike Lewis, and
[Luke Zettlemoyer. 2020. Multilingual denoising pre-](https://doi.org/10.1162/tacl_a_00343)
[training for neural machine translation. Trans. Assoc.](https://doi.org/10.1162/tacl_a_00343)
_Comput. Linguistics, 8:726–742._
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis,
Luke Zettlemoyer, and Veselin Stoyanov. 2019.
[Roberta: A robustly optimized bert pretraining ap-](http://arxiv.org/abs/1907.11692)
[proach. Cite arxiv:1907.11692.](http://arxiv.org/abs/1907.11692)
Mengqi Miao, Fandong Meng, Yijin Liu, Xiao-Hua
[Zhou, and Jie Zhou. 2021. Prevent the language](https://doi.org/10.18653/v1/2021.acl-long.268)
[model from being overconfident in neural machine](https://doi.org/10.18653/v1/2021.acl-long.268)
[translation. In Proceedings of the 59th Annual Meet-](https://doi.org/10.18653/v1/2021.acl-long.268)
_ing of the Association for Computational Linguistics_
_and the 11th International Joint Conference on Natu-_
_ral Language Processing (Volume 1: Long Papers),_
pages 3456–3468, Online. Association for Computational Linguistics.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
2021. Are nlp models really able to solve simple
math word problems? In Proceedings of the 2021
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094._
[Subhro Roy and Dan Roth. 2018. Mapping to declara-](https://doi.org/10.1162/tacl_a_00012)
[tive knowledge for word problem solving. Transac-](https://doi.org/10.1162/tacl_a_00012)
_tions of the Association for Computational Linguis-_
_tics, 6:159–172._
Torsten Scholak, Nathan Schucher, and Dzmitry Bah[danau. 2021. PICARD: Parsing incrementally for](https://doi.org/10.18653/v1/2021.emnlp-main.779)
[constrained auto-regressive decoding from language](https://doi.org/10.18653/v1/2021.emnlp-main.779)
[models. In Proceedings of the 2021 Conference on](https://doi.org/10.18653/v1/2021.emnlp-main.779)
_Empirical Methods in Natural Language Processing,_
pages 9895–9901, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jianhao Shen, Yichun Yin, Lin Li, Lifeng Shang, Xin
[Jiang, Ming Zhang, and Qun Liu. 2021. Generate &](https://doi.org/10.18653/v1/2021.findings-emnlp.195)
[rank: A multi-task framework for math word prob-](https://doi.org/10.18653/v1/2021.findings-emnlp.195)
[lems. In Findings of the Association for Computa-](https://doi.org/10.18653/v1/2021.findings-emnlp.195)
_tional Linguistics: EMNLP 2021, pages 2269–2279,_
Punta Cana, Dominican Republic. Association for
Computational Linguistics.
[Yibin Shen and Cheqing Jin. 2020. Solving math word](https://doi.org/10.18653/v1/2020.coling-main.262)
[problems with multi-encoders and multi-decoders.](https://doi.org/10.18653/v1/2020.coling-main.262)
In Proceedings of the 28th International Conference
_on Computational Linguistics, pages 2924–2934,_
Barcelona, Spain (Online). International Committee
on Computational Linguistics.
Ryan Steed, Swetasudha Panda, Ari Kobren, and
[Michael L. Wick. 2022. Upstream mitigation is not](https://doi.org/10.18653/v1/2022.acl-long.247)
[all you need: Testing the bias transfer hypothesis in](https://doi.org/10.18653/v1/2022.acl-long.247)
[pre-trained language models. In Proceedings of the](https://doi.org/10.18653/v1/2022.acl-long.247)
_60th Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), ACL_
_2022, Dublin, Ireland, May 22-27, 2022, pages 3524–_
3542. Association for Computational Linguistics.
Minghuan Tan, Lei Wang, Lingxiao Jiang, and Jing
Jiang. 2021. Investigating math word problems using pretrained multilingual language models. arXiv
_preprint arXiv:2105.08928._
[Christoph Tillmann and Hermann Ney. 2003. Word](https://doi.org/10.1162/089120103321337458)
[reordering and a dynamic programming beam search](https://doi.org/10.1162/089120103321337458)
[algorithm for statistical machine translation. Compu-](https://doi.org/10.1162/089120103321337458)
_tational Linguistics, 29(1):97–133._
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. Advances in neural information processing
_systems, 30._
Deng-Bao Wang, Lei Feng, and Min-Ling Zhang. 2021.
[Rethinking calibration of deep neural networks: Do](https://proceedings.neurips.cc/paper/2021/hash/61f3a6dbc9120ea78ef75544826c814e-Abstract.html)
[not be afraid of overconfidence. In Advances in Neu-](https://proceedings.neurips.cc/paper/2021/hash/61f3a6dbc9120ea78ef75544826c814e-Abstract.html)
_ral Information Processing Systems 34: Annual Con-_
_ference on Neural Information Processing Systems_
_2021, NeurIPS 2021, December 6-14, 2021, virtual,_
pages 11809–11820.
Lei Wang, Dongxiang Zhang, Jipeng Zhang, Xing Xu,
Lianli Gao, Bing Tian Dai, and Heng Tao Shen. 2019.
[Template-based math word problem solvers with re-](https://doi.org/10.1609/aaai.v33i01.33017144)
[cursive neural networks. In Proceedings of the Thirty-](https://doi.org/10.1609/aaai.v33i01.33017144)
_Third AAAI Conference on Artificial Intelligence and_
_Thirty-First Innovative Applications of Artificial In-_
_telligence Conference and Ninth AAAI Symposium_
_on Educational Advances in Artificial Intelligence,_
AAAI’19/IAAI’19/EAAI’19. AAAI Press.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, and Denny Zhou. 2022. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
[Deep neural solver for math word problems. In Pro-](https://doi.org/10.18653/v1/D17-1088)
_ceedings of the 2017 Conference on Empirical Meth-_
_ods in Natural Language Processing, pages 845–854,_
Copenhagen, Denmark. Association for Computational Linguistics.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2022. Chain of thought prompting](https://doi.org/10.48550/ARXIV.2201.11903)
[elicits reasoning in large language models.](https://doi.org/10.48550/ARXIV.2201.11903)
1747
-----
**A.2** **Datasets Statistics**
The statistics of datasets are presented in Table 6.
CMWPA is a synthetic English dataset with 1000
training samples and 100 test and validation samples. MAWPS and MathQA are public English
MWP datasets that contain 1.9K math problems
and 20K math problems, respectively. Math23K is
a public Chinese MWP dataset that contains 23K
math problems. We use the average number of operations to assess the difficulty of a MWP dataset. As
we can see, MAWPS is the simplest dataset because
almost all problems require only one or two operations. MathQA is the most challenging dataset,
requiring more operations and, hence, more steps in
the reasoning process to obtain the answer. SVAMP
is also a challenging dataset that is manually created to evaluate a model’s robustness. They apply
variations to the instances sampled from MAWPS.
Such variations could include adding extra quantities, swapping the positions between noun phrases,
etc.
**A.3** **Parsing and Execution**
Due to the existence of higher-order operators, the
way we calculate the answer is different from previous works. We implement the corresponding
solving function using Python for each pre-defined
operator, which is also included in our published
code. During inference, for the generated operation
sequence, we sequentially calculate the returned
quantities for each operation. Naturally, the returned quantities of the last operation denote the
answer to the problem. For a generated operation,
we first parse out its operator and several operands.
Then, we call the solving function corresponding
to the operator to obtain the returned quantities.
**A.4** **CMWPA Templates**
We show the templates corresponding to the advanced operators as follows.
Two templates for the [linear equation solver]
operator:
- Text description: [q0] * [o0] + [q1] * [o1] = [q4]; [q2] *
[o0] + [q3] * [o1] = [q5].
Operation: [linear equation solver] [q0] [q1] [q2] [q3]
[q4] [q5]
- Text description: Determine [o0], [o1] as the result of
inverse of matrix [ [ [q0], [q1] ], [ [q2], [q3] ] ] times
vector [ [q4], [q5] ].
Qinzhuo Wu, Qi Zhang, Jinlan Fu, and Xuanjing Huang.
[2020. A knowledge-aware sequence-to-tree network](https://doi.org/10.18653/v1/2020.emnlp-main.579)
[for math word problem solving. In Proceedings of](https://doi.org/10.18653/v1/2020.emnlp-main.579)
_the 2020 Conference on Empirical Methods in Nat-_
_ural Language Processing (EMNLP), pages 7137–_
7146, Online. Association for Computational Linguistics.
Zhipeng Xie and Shichao Sun. 2019. A goal-driven
tree-structured neural model for math word problems.
In IJCAI, pages 5299–5305.
Jipeng Zhang, Lei Wang, Roy Ka-Wei Lee, Yi Bin, Yan
[Wang, Jie Shao, and Ee-Peng Lim. 2020. Graph-to-](https://doi.org/10.18653/v1/2020.acl-main.362)
[tree learning for solving math word problems. In](https://doi.org/10.18653/v1/2020.acl-main.362)
_Proceedings of the 58th Annual Meeting of the Asso-_
_ciation for Computational Linguistics, pages 3928–_
3937, Online. Association for Computational Linguistics.
Shiyue Zhang, Benjamin Frey, and Mohit Bansal. 2021.
[Chrentranslate: Cherokee-english machine transla-](https://doi.org/10.18653/v1/2021.acl-demo.33)
[tion demo with quality estimation and corrective feed-](https://doi.org/10.18653/v1/2021.acl-demo.33)
[back. In Proceedings of the Joint Conference of the](https://doi.org/10.18653/v1/2021.acl-demo.33)
_59th Annual Meeting of the Association for Compu-_
_tational Linguistics and the 11th International Joint_
_Conference on Natural Language Processing, ACL_
_2021 - System Demonstrations, Online, August 1-6,_
_2021, pages 272–279. Association for Computational_
Linguistics.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel
Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. 2022.
Opt: Open pre-trained transformer language models.
_arXiv preprint arXiv:2205.01068._
**A** **Appendix**
**A.1** **Complexity Analysis**
Consider a math problem that has n words in the
problem description and |O| operations in the solving process. A total of κ words are needed to
describe the _O_ operations, i.e., κ = _t=1_
GeDe needs to perform | _|_ _|O| operation re-encode[|][o][t][|][.]_
steps and κ token generation steps. For the[P][|][O][|] τ -th
re-encode step, its computational complexity is
_O((n +_ _t=1_
in o[τ], its computational complexity is[|][o][t][|][)][2][)][. For generating the tokens] O(|o[τ] _| ∗_
(n + _t[P]=1[τ]_ _[−][1]_ Therefore, the overall time
_[|][o][t][|][))][.]_
complexity is _τ_ =1[(][n][ +][ P][τ]t=1[−][1]
(n + [P]t[τ]=1[−][1] _[|][o][t][|][)][2][) +][ |][o][τ]_ _[| ∗]_
If we use the unidirectional attention model as[|][o][t][P][|][)][ < O][|][O][|] [(][|][O][| ∗] _[n][2][ +][ κ][ ∗]_ [(][n][ +][ κ][))][.]
re-encoder, the complexity can be lowered to[P][τ] _[−][1]_
_O(n[2]_ + _κ_ _∗_ (n + _κ)), which is the same as what the_
current seq2seq generation methods achieve. The
additional time complexity is acceptable because
_|O| is typically not very large._
1748
-----
Operation: [linear equation solver] [q0] [q1] [q2] [q3]
[q4] [q5]
One template for the [quadratic function integral
solver] operator:
- Text description: Determine [o0] as the definite integral of quadratic function [q0] * xˆ2 + [q1] * x + [q2]
between the intervals [q3] and [q4].
Operation: [quadratic function integral solver] [q3] [q4]
[q0] [q1] [q2]
One template for the [quadratic function extremum solver] operator:
- Text description: Determine [o0] as the the extremum
value of quadratic function [q0] * xˆ2 + [q1] * x + [q2].
Operation: [quadratic function extremum solver] [q0]
[q1] [q2]
Based on the templates, we generate a sample
as follows. First, we randomly initialize a candidate set of quantities. Then, we randomly select
a template and fill in slots by randomly selecting
quantities from the quantity candidate set. We reinput the returned quantities of the operation into
the candidate set and repeat the above process several times. In this way, a problem description and
its operation sequence are generated. We also convert the operation sequence into a pre-order binary
expression as another type of annotation for training the seq2seq baseline.
**A.5** **CMWPA Example**
We provide a sample of CMWPA in Table 5.
This sample is initialized with 6 quantities and
involves four types of operators: the subtraction
operator, the [linear equation solver] operator, the
[quadratic function integral solver] operator, and
the [quadratic function extremum solver] operator. Two types of annotations are provided: the
multivariant operation sequence and the pre-order
binary expression (pre-order binary expression can
be transformed into a binary operation sequence
(bDAG) or a binary expression tree). For each operation in the multivariant operation sequence, we
provide the operation, the input quantities, and the
returned output quantities.
1749
-----
**Problem:**
Given [q0] = 0.23 . [q1] = 0.43 . [q2] = 0.18 . [q3] = 0.26 . [q4] = 0.71 . [q5] = 0.85 . Determine [q6] as the
[q4] minus [q5] . Determine [q7] [q8] as the result of inverse of matrix [ [ [q4], [q3] ], [ [q2], [q5] ] ] times
vector [ [q0], [q6] ] . Determine [q9] as the definite integral of quadratic function [q6] * x[2] + [q7] * x + [q5]
between the intervals [q1] and [q8] . Determine [q10] as the the extremum value of quadratic function [q8] *
_x[2]_ + [q6] * x + [q9] . Output the value of [q10] .
**Multivariant Operation Sequence:**
1. operation1: [-, [QTT4], [QTT5]]
returned quantities of operation1: [[QTT6]]
2. operation2: [[linear equation solver], [QTT4], [QTT3], [QTT2], [QTT5], [QTT0], [QTT6]]
returned quantities of operation2: [[QTT7], [QTT8]]
3. operation3: [[quadratic function integral solver], [QTT1], [QTT8], [QTT6], [QTT7], [QTT5]]
returned quantities of operation3: [[QTT9]]
4. operation4: [[quadratic function extremum solver], [QTT8], [QTT6], [QTT9]]
returned quantities of operation4: [[QTT10]]
**Pre-order binary expression:**
+, *, /, -, *, [QTT4], [QTT0], *, [QTT2], -, [QTT4], [QTT5], -, *, [QTT4], [QTT5], *, [QTT2],
[QTT3], ˆ, *, [c3], /, -, [QTT4], [QTT5], *, [c1], /, -, *, [QTT4], [QTT0], *, [QTT2], -, [QTT4],
[QTT5], -, *, [QTT4], [QTT5], *, [QTT2], [QTT3], [c1], +, *, -, [QTT4], [QTT5], *, [c3], /, -,
[QTT4], [QTT5], *, [c1], /, -, *, [QTT4], [QTT0], *, [QTT2], -, [QTT4], [QTT5], -, *, [QTT4],
[QTT5], *, [QTT2], [QTT3], -, +, *, /, -, [QTT4], [QTT5], [c2], ˆ, /, -, *, [QTT4], [QTT0], *,
[QTT2], -, [QTT4], [QTT5], -, *, [QTT4], [QTT5], *, [QTT2], [QTT3], [c2], +, *, /, /, -, *, [QTT5]
, [QTT0], *, [QTT3], -, [QTT4], [QTT5], -, *, [QTT4], [QTT5], *, [QTT2], [QTT3], [c1], ˆ, /, -, *,
[QTT4], [QTT0], *, [QTT2], -, [QTT4], [QTT5], -, *, [QTT4], [QTT5], *, [QTT2], [QTT3], [c1], *,
[QTT5], /, -, *, [QTT4], [QTT0], *, [QTT2], -, [QTT4], [QTT5], -, *, [QTT4], [QTT5], *, [QTT2]
, [QTT3], +, *, /, -, [QTT4], [QTT5], [c2], ˆ, [QTT1], [c2], +, *, /, /, -, *, [QTT5], [QTT0], *,
[QTT3], -, [QTT4], [QTT5], -, *, [QTT4], [QTT5], *, [QTT2], [QTT3], [c1], ˆ, [QTT1], [c1], *,
[QTT5], [QTT1]
Table 5: A sample of CMWPA. [QTTi] represents the i-th quantity, [c1], [c2], and [c3] represent three constants 1,
2, and 3 respectively.
Dataset #Train #Dev #Test Avg. #Operations Avg. PDL Operation Types Language
CMWPA 1,000 100 100 2.98 329.55 Basic & Advanced English
MAWPS 1,589 199 199 1.41 299.31 Basic English
Math23k 21,162 1,000 1,000 2.27 156.28 Basic Chinese
MathQA 16,191 2,411 1,605 4.25 374.89 Basic English
SVAMP 3,138 - 1,000 1.3 159.6 Basic English
Table 6: Detailed statistics of all datasets. PDL means problem description length.
1750
-----
| [
"Jing, Zhang",
"Hong, Chen",
"Yuxuan, Hu",
"Haoyang, Li",
"Cuiping, Li",
"Houda, Bouamor",
"Juan, Pino",
"Kalika, Bali"
] | 2023-12-01T00:00:00 | EMNLP 2023 Main | true | 0 | 0 | null | https://aclanthology.org/2023.emnlp-main.108 | null | https://www.semanticscholar.org/paper/2f9221877030c28cf98f0847ff8b8e787377b9a6 |
A Language-Agent Approach to Formal Theorem-Proving | Language agents, which use a large language model (LLM) capable of in-context learning to interact with an external environment, have recently emerged as a promising approach to control tasks. We present the first language-agent approach to formal theorem-proving. Our method, COPRA, uses a high-capacity, black-box LLM (GPT-4) as part of a policy for a stateful backtracking search. During the search, the policy can select proof tactics and retrieve lemmas and definitions from an external database. Each selected tactic is executed in the underlying proof framework, and the execution feedback is used to build the prompt for the next policy invocation. The search also tracks selected information from its history and uses it to reduce hallucinations and unnecessary LLM queries. We evaluate COPRA on the miniF2F benchmark for Lean and a set of Coq tasks from the Compcert project. On these benchmarks, COPRA is significantly better than one-shot invocations of GPT-4, as well as state-of-the-art models fine-tuned on proof data, at finding correct proofs quickly. | null | ## A LANGUAGE-AGENT APPROACH TO FORMAL THEOREM-PROVING
**Amitayush Thakur, Yeming Wen & Swarat Chaudhuri**
The University of Texas at Austin
_{amitayush, ywen}@utexas.edu, [email protected]_
ABSTRACT
Language agents, which use a large language model (LLM) capable of in-context
learning to interact with an external environment, have recently emerged as a
promising approach to control tasks. We present the first language-agent approach
to formal theorem-proving. Our method, COPRA, uses a high-capacity, black-box
LLM (GPT-4) as part of a policy for a stateful backtracking search. During the
search, the policy can select proof tactics and retrieve lemmas and definitions from
an external database. Each selected tactic is executed in the underlying proof
framework, and the execution feedback is used to build the prompt for the next
policy invocation. The search also tracks selected information from its history
and uses it to reduce hallucinations and unnecessary LLM queries.
We evaluate COPRA on the miniF2F benchmark for Lean and a set of Coq tasks
from the Compcert project. On these benchmarks, COPRA is significantly better
than one-shot invocations of GPT-4, as well as state-of-the-art models fine-tuned
on proof data, at finding correct proofs quickly.
1 INTRODUCTION
Automatically proving formal theorems (Newell et al., 1957) is a longstanding challenge in computer science. Autoregressive language models (Polu & Sutskever, 2020; Han et al., 2021; Yang
et al., 2023) have recently emerged as an effective approach to this problem. Such models are
trained on proofs written in frameworks like Coq (Huet et al., 1997) or Lean (de Moura et al., 2015),
which allows proof goals to be iteratively simplified using a set of tactics. Theorem-proving then
amounts to generating a sequence of tactics that iteratively “discharges” a given proof goal.
A weakness of this method is that it does not model the interaction between the model and the underlying proof framework. The application of a tactic is an action that changes the state of the proof
and the interpretation of future tactics. By ignoring these game-like dynamics, autoregressive models miss out on a valuable source of feedback and end up being more susceptible to hallucinations.
In this paper, we show that the nascent paradigm of large-language-model (LLM) agents (Yao et al.,
2022; Wang et al., 2023; Shinn et al., 2023) can help address this weakness. Here, one uses an LLM
as a agent that interacts with an external environment. Information gathered through interaction is
used to update the LLM’s prompt, eliciting new agent behavior because of in-context learning.
Our approach, called COPRA[1] (Figure 1), uses an off-the-shelf, high-capacity LLM (GPT-4 (OpenAI, 2023)) as part of a policy in that interacts with a proof environment like Coq or Lean. At each
time step, the policy consumes a textual prompt and chooses to use an available tactic, or backtrack,
or retrieve relevant lemmas and definitions from an external corpus. When the policy selects a tactic, we “execute” it using the underlying proof assistant. The feedback from the execution is used to
construct a new prompt for the policy, and the process repeats.
COPRA goes beyond prior language-agent methods in using domain knowledge and information
from the search history to use LLM queries frugally. When tactics fail, the policy records this
information and uses it to avoid future failures. The policy also has access to a symbolic procedure
1COPRA is an acronym for “In-context Prover Agent”.
-----
Figure 1: An overview of COPRA. The system implements a policy that interacts with a proof
environment (Coq or Lean). Internally, a COPRA policy consists of an LLM (GPT-4), a stackbased backtracking search, a retrieval mechanism, a dictionary tracking past failures, and a prompt
serialization protocol that constructs LLM prompts using the stack and environment feedback and
parse LLM outputs into actions.
that checks if one goal is “simpler” than another. A tactic is only used when it simplifies the agent’s
proof obligations (ruling out, among other things, cyclic tactic sequences).
We have integrated COPRA with both the Coq and the Lean environments. We evaluate the system
using the miniF2F (Zheng et al., 2021) benchmark for competition-level mathematical reasoning
in Lean and a set of Coq proof tasks (Sanchez-Stern et al., 2020) from the Compcert (Leroy, 2009)
project on verified compilation. Using a new metric called prove-at-k-inferences, we show that
COPRA can converge to correct proofs faster than competing approaches, including the state-of-theart models (Yang et al., 2023; Sanchez-Stern et al., 2020) trained on formal proof data. We also
show that when COPRA fails, it fails quicker than the baseline methods.
To summarize our contributions, we offer: (i) The first approach to formal theorem-proving that
leverages LLMs while also modeling interactions between the model and the underlying proof
framework; (ii) the first language agent, from any domain, to integrate LLM policies with a search
that minimizes LLM queries and hallucinations by tracking domain-specific information from the
past; and (iii) an implementation of COPRA that interacts with the Coq and Lean proof environments,
and an evaluation on two domains — mathematics competition problems and formal verification —
that shows COPRA to find proofs faster than competing approaches.
2 THEOREM-PROVING AS A CONTROL PROBLEM
2.1 BACKGROUND ON THEOREM-PROVING
A formal proof starts with a set of unmet obligations stated in a formal language and applies a
sequence of proof tactics to progressively eliminate these obligations. Each obligation o consists of
a goal g and a hypothesis h. The goal g consists of the propositions that need to be proved in order
-----
to meet o; the hypothesis h captures assumptions that can be made in the proof of g. The prover’s
long-term objective is to reduce the obligations to the empty set.
We illustrate this process with the example in Figure 2-(a). This example
shows a Lean (de Moura et al., 2015)
proof, automatically generated using
COPRA, of a basic theorem about modular arithmetic. The proof first applies
the intro tactic, which changes a goal
_P →_ _Q to a hypothesis P and a goal_
_Q. Next, it applies the rw (rewrite) tac-_
tic, which gives a way to apply substitutions to goals and hypotheses, several times. It ends with the application
of the refl (reflexivity) tactic, which
eliminates goals that say that a value is
equal to itself.
(a)
theorem mod_arith_2
(x : N) : x % 2 = 0
:=→ (x * x) % 2 = 0
begin
intro h,
rw nat.mul_mod,
rw h,
rw nat.zero_mul,
refl,
end
(b)
(c)
begin
intro h,
have h1 : x = 2 * (x
/ 2)
:= (nat.
mul_div_cancel' h)
.symm,
rw h1,
rw nat.mul_div_assoc
(dvd_refl _),show 2 | 2, from
rw [mul_assoc, nat.
mul_mod_right],
end
equal to itself. x: N mul_mod_right],
h: x % 2 = 0
Existing LLM-based approaches to au-tomatic theorem-proving view such _⊢_ x * x % 2 = 0 end
proofs as purely syntactic artifacts.
However, the rigorous semantics of
Figure 2: (a) A Lean theorem and a correct proof found
proofs can be difficult to learn using
by COPRA. (b) Proof state after the first tactic. (c) An
such an approach, leading to the gener
incorrect proof generated by GPT-4.
ation of incorrect proofs. Figure 2-(c)
shows a GPT-4-generated incorrect proof of our theorem.
2.2 A MARKOV DECISION PROCESS FORMULATION
By contrast, COPRA is based on a view of automatic theorem-proving as a control problem. Like
prior work on reinforcement learning (RL) for proof synthesis (Wu et al., 2021), we view a theoremprover as a policy that interacts with a stateful proof environment (e.g., Lean) and model the interaction between the policy and the environment as a deterministic Markov Decision Process (MDP). We
depart from prior RL-based work for theorem-proving by imposing a partial order on MDP states,
allowing rewards to have a textual component, and allowing history-dependent policies.
Now we describe the different components of our proof MDP.
**States. As before, let an obligation be a pair (g, h), where g is a goal and h a hypothesis. A state**
of the MDP is either a special symbol called error or a set O = _o1, . . ., ok_ of obligations oi.
_{_ _}_
The MDP has a unique initial state oin with a single obligation (gin _, hin_ ), where the goal gin and
the hypothesis hin are extracted from the user-provided theorem that we are trying to prove. Its
unique final state QED is the empty obligation set.
Following Sanchez-Stern et al. (2020), we define a partial order ⊑ over states that defines when a
state is “at least as hard” than another and use it to avoid actions that do not lead to progress in the
proof. Formally, for states O1 and O2 with O1 ̸= error and O2 ̸= error, O1 ⊑ _O2 iff_
_∀_ _oi = (gi, hi) ∈_ _O1. ∃ok = (gk, hk) ∈_ _O2. gk = gi ∧_ (hk → _hi)._
Intuitively,we have an efficient symbolic procedure that can check this relationship for any pair of states. The O1 ⊑ _O2 if for every obligation in O1, there is a stronger obligation in O2. We assume_
procedure is sound, meaning that if it reports O1 _O2, the relationship actually holds. However, it_
is incomplete, i.e., it may not detect all relationships of the form ⊑ _O1_ _O2._
_⊑_
**Actions and Transitions. The actions in our MDP are the proof environment’s tactics.**
The transition function T (O, a) determines the result of applying an action a to a state O. When a is
a tactic, we assume the underlying proof environment to return a state O[′] that results from applying
_a to O. If a is a “bad” tactic, then O[′]_ equals error ; otherwise, O[′] is a new set of obligations. We
assume that our agent can evaluate T (O, a) for any state O and action a. While this assumption is
unacceptable in many MDP problems, it is reasonable in the theorem-proving setting.
-----
**Rewards. As usual, we assume a reward function R(O, a) that evaluates an action a at a state O.**
Historically, such functions are scalar-valued; however, because we use LLMs as policies, we allow
rewards to also include rich textual feedback from the proof environment. Concretely, we consider
rewards of the form R(O, a) = (˜r, w), where:
(1) ˜r is a very high positive value if T (O, a) = QED, a negative value if T (O, a) = error, and 0
otherwise, and (2) w is the feedback from the proof environment when a is executed from O.
**Histories and Policies. A history of length N is a sequence**
_h = ⟨(O0, a0, O0[′]_ _[, r][0][)][,][ (][O][1][, a][1][, O]1[′]_ _[, r][1][)][, . . .,][ (][O][N]_ _[−][1][, a][N]_ _[−][1][, O]N[′]_ _[, r][N]_ [)][⟩]
such that O0 = Oin and for all i, ri = R(Oi, ai) and Oi[′] [=][ T] [(][O][i][, a][i][)][. Intuitively, a history records]
the interactions between the prover agent and the proof environment up to a point of time. We denote
by hi the i-th prefix of h. For example, h0 = ⟨⟩, h1 = ⟨(O0, a0, O0[′] _[, r][0][)][⟩][, and so on.]_
A policy is a probabilistic function π that maps histories to distributions over pairs (O, a), where O
is a state and a is an action. Intuitively, at each point, the policy determines the next query to make
to the proof environment.
A policy can have an internal state as well as access to external knowledge (specifically, a lemma
database). A trajectory of a policy π is a history h as above such that for each i,
**Pr[π(hi) = (Oi, ai)] > 0.**
Letting each ri = (˜ri, wi), the scalar reward from a trajectory is simply the average _N[1]_ _i_ _r[˜]i. We_
define the aggregate (scalar) reward of π as the expected scalar reward from trajectories sampled
from π. P
**Language Agents. Given our setup, one can naturally pose the problem of reinforcement-learning**
a policy with optimal aggregate reward. In this paper, we do not take on this problem. Instead, we
consider a fixed policy — a wrapper around a pretrained LLM (GPT-4) that can learn in-context
— and show that this policy can achieve a high reward. It is this policy that defines our language
_agent._
3 THE COPRA AGENT
A COPRA policy has access to an LLM
(in practice, GPT-4) and performs a
depth-first search. During the search,
it records information about failed actions. It also uses the ⊑ relation
over states to checks that it is making
progress on the proof.
Figure 3 shows pseudocode for such a
policy. The policy maintains a stack of
MDP states and a “failure dictionary”
_Bad that maps a state to a set of actions_
that are known to be “unproductive” at
the state. At each search step, the algorithm pushes the current state on the
stack and retrieves external lemmas and
definitions relevant to the state. After
this, it repeatedly serializes the stack
and Bad (O) into a prompt and feeds
it to the LLM. The LLM’s output is
parsed into an action, and the agent executes it in the environment.
One outcome of the action could be that
the agent arrives at QED. Alternatively,
the new state could be an error or represent obligations that are at least as hard
COPRA(O)
1 PUSH(st, O)
2 _ρ ←_ RETRIEVE(O)
3 **for j ←** 1 to k
4 **do p ←** PROMPTIFY(st, Bad (O), ρ, r)
5 _a ∼_ PARSEACTION(LLM(p))
6 _O[′]_ _←_ _T_ (O, a), r ← _R(O, a)_
7 **if O[′]** = QED
8 **then terminate successfully**
9 **else if O[′]** = error or
_∃O[′′]_ _∈_ _st. O[′′]_ _⊑_ _O[′]_
10 **then add a to Bad** (O)
11 **else COPRA(O[′])**
12 POP(st)
Figure 3: The search procedure in COPRA. T is the environment’s transition function and R is the reward function. st is a stack, initialized to be empty. Bad (O) is a
set of actions, initialized to ∅, that are known to be bad
at O. LLM is an LLM, PROMPTIFY generates a prompt,
PARSEACTION parses the output of the LLM into an action (repeatedly querying the LLM in case there are formatting errors in its output), and RETRIEVE gathers relevant lemmas and definitions from an external source. The
procedure is initially called with argument Oin .
-----
Figure 4: The prompt serialization protocol. We highlight the different parts of the prompts to show
how we use the state stack and the textual reward from the environment.
as what is currently on the stack (for
example, this could be because of a cycle in a tactic). In this case, the agent rejects the new state.
Otherwise, it recursively continues the proof from the new state. After issuing a few queries to the
LLM, the agent backtracks.
**Prompt Serialization Protocol. The routines PROMPTIFY and PARSEACTION together constitute**
the prompt serialization protocol and are critical to the success of the policy. Now we elaborate on
these procedures.
PROMPTIFY carefully places the different pieces of information relevant to the proof in the prompt.
It also includes logic for trimming this information to fit the most relevant parts in the LLM’s context
window. Every prompt has two parts: the “system prompt” and the “agent prompt”.
The agent prompts are synthetically generated using a context-free grammar and contain information
about the state stack (including the current proof state), the textual reward for the previous action,
and the set of actions we know to avoid at the current proof state.
The system prompt describes the rules of engagement for the LLM. It contains a grammar (distinct
from the one for agent prompts) that we expect the LLMs to follow when it proposes a course of
action. The grammar carefully incorporates cases when the response is incomplete because of the
LLM’s token limits. We parse partial responses to extract the next action using the PARSEACTION
routine. PARSEACTION also identifies formatting errors (if any) in the LLM’s responses, possibly
communicating with the LLM multiple times until these errors are resolved.
**Example.** Figure 4 illustrates the prompt serialization protocol at work during the generation of
the proof in Figure 2-(b). Seq #1-#4 represent distinct invocations of the LLM. In each invocation,
PROMPTIFY first generates the “agent prompt,” which consists of three parts. The first part (“state”)
is simply a serialization of the current proof state. The second (“stack”) incorporates information
about previous actions as well as the bad actions for the current proof state. The third (“reward”)
encodes the feedback from the environment regarding the success or failure of the last action. The
response of the LLM to this prompt is then translated into an action using PARSEACTION. This
action is then executed on the theorem prover.
-----
4 EVALUATION
Our findings about COPRA are that: (i) the approach can find proofs significantly quicker than the
state-of-the-art finetuning-based baselines, both in terms of number of LLM queries and wall-clock
time; (ii) in problems where all current methods fail, COPRA fails faster; (iii) the use of GPT-4, as
opposed to GPT-3.5, within the agent is essential for success; and (iv) backtracking significantly
improves the system’s performance on harder problems. Now we elaborate on our experimental
methodology and these results.
**Implementing COPRA. Our implementation of COPRA**
has GPT-4 as the underlying LLM and can interact with
both the Lean and the Coq proof environments. Because
of the substantial cost of GPT-4 queries, we cap the number of LLM queries that COPRA can make by 60. To further reduce costs, COPRA first tries to prove its theorems
via a single LLM query (one-shot prompting). It only
invokes its agent behavior when the one-shot prompting
fails to find a proof.
The “system prompt” in the one-shot approach is slightly
different than that for COPRA, containing instructions to
generate a proof in one go rather than step by step. For
both COPRA and the one-shot baselines, the prompt contains a single proof example that clarifies how proofs need
to be formatted. This proof example remains the same for
Figure 5: COPRA vs. REPROVER on the all test cases.
```
miniF2F benchmark
```
**Benchmarks.** We evaluate our approach on two domains: (i) miniF2F (Zheng et al., 2021), a collection of
244 Lean formalizations of mathematics competition problems, solved using a range of techniques
such as induction, algebraic manipulation, and contradiction; and (ii) a set of Coq problems from
the CompCert compiler verification project (Leroy, 2009) that was previously used to evaluate the
PROVERBOT9001 system Sanchez-Stern et al. (2020).
**Baselines.** We compare with one-shot invocations of
GPT-3.5 and GPT-4 in both the miniF2F and the Compcert domains. We also consider an ablation of COPRA
that uses GPT-3.5 as its LLM and another that does not
use backtracking. As for fine-tuned baselines, a challenge
is that all existing open-source theorem-proving systems
only target a single proof environment. As a result, we
had to choose different baselines for the Lean (miniF2F)
and Coq (Compcert) domains.
Our fine-tuned baseline for the miniF2F domain is REPROVER, a state-of-the-art open-source prover that is part
of the Leandojo project (Yang et al., 2023). A challenge
with this baseline is that like COPRA, it uses a retrieval
mechanism. However, building a comparable retriever
for COPRA would require an indexed training corpus on
Figure 6: COPRA vs. PROVER- problems relevant to miniF2F. However, miniF2F is
BOT9001 on the Compcert benchmark only an evaluation set and does not come with a training
corpus. As a result, for an apples-to-apples comparison,
our evaluation on miniF2F turns off COPRA’s and REPROVER’s retrievers.
In the Compcert domain, we compare with PROVERBOT9001 (Sanchez-Stern et al., 2020), which,
while not LLM-based, is the best publicly available model for Coq. Unlike miniF2F, this benchmark
comes with a large training set as well as a test set, and we use the training set for retrieving relevant
lemmas and definitions. Our retrieval mechanism, in this case, is a simple BM25 search.
-----
|Approach|# Theorems proved /# Theorems|% proved|Avg. Inferences in Total|Avg. Inferences on Failure|Avg. Inferences on Pass|Max. Inferences Allowed|
|---|---|---|---|---|---|---|
|miniF2F Test Dataset|||||||
|GPT 3.5 One-Shot GPT 4 One-Shot COPRA (GPT-3.5) ReProver COPRA (GPT-4)|7/244 26/244 29/244 54/244 57/244|2.8% 10.6% 11.89% 22.13% 23.36%|1 1 12.83 350.7 20.94|1 1 14.23 427.24 26.79|1 1 2.45 81.6 1.75|1 1 60 1076 60|
|CompCert Test Dataset|||||||
|GPT 3.5 One-Shot GPT 4 One-Shot Proverbot COPRA|10/118 36/118 98/118 76/118|8.47% 30.51% 83.05% 64.41%|1 1 184.7 12.9|1 1 256.8 10.9|1 1 170.0 16.57|1 1 2344 60|
Table 1: Aggregate statistics for COPRA and the baselines on miniF2F and Compcert
|Approach|Avg. Time In Seconds|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||Per Proof|||Per Inference|||
||On Pass|On Fail|All|On Pass|On Fail|All|
|ReProver (on CPU) ReProver (on GPU) COPRA (GPT-3.5) COPRA (GPT-4)|279.19 267.94 39.13 30.21|618.97 601.35 134.26 191.73|543.78 520.74 122.21 140.86|3.42 2.06 15.97 17.26|1.45 0.44 9.43 7.16|1.55 0.48 9.53 6.73|
Table 2: Average time taken by our approach (COPRA) and ReProver on miniF2F dataset.
For cost reasons, our evaluation for Compcert uses 118 out the 501 theorems used in the original
evaluation of PROVERBOT9001 Sanchez-Stern et al. (2020). For fairness, we include all the 98 theorems proved by PROVERBOT9001 in our subset. The remaining theorems are randomly sampled.
**Metric: pass@k-inferences. The standard metric for evaluating theorem-provers is pass@k (Lam-**
ple et al., 2022; Yang et al., 2023). In this metric, a prover is given a budget of k proof attempts; the
method is considered successful if one of these attempts leads to success. However, a key objective
of our research is to discover proofs quickly, with fewer LLM queries and lower wall-clock time.
The pass@k metric does not evaluate this characteristic as it does not quantify the number of LLM
queries or amount of time needed by a proof attempt.
# Theorems Figure 6 shows a comparison between CO
To address this concern, we introduce a
new metric, pass@k-inferences, and evaluate COPRA and its competitors using this
metric. Here, we measure the number of
correct proofs that a prover can generate
with a budget of k or fewer LLM infer_ence queries. One challenge here is that we_
want this metric to be correlated number
of correct proofs that the prover produces
within a wall-clock time budget; however,
the cost of an inference query is propor
|Approach|# Theorems proved /# Theorems|% proved|
|---|---|---|
|miniF2F Test Dataset|||
|COPRA (GPT-4) w/o backtracking COPRA (GPT-4)|56/244 57/244|22.95% 23.36%|
|CompCert Test Dataset|||
|COPRA (GPT-4) w/o backtracking COPRA (GPT-4)|52/118 76/118|44.06% 64.41%|
Table 3: The effectiveness of backtracking
-----
tional to the number of responses generated per query. To maintain the correlation between the
number of inference queries and wall-clock time, we restrict each inference on LLM to a single
response.
**Results.** Figure 5 shows our comparison results for the miniF2F domain. As we see, COPRA
outperforms REPROVER, completing, within just 60 inferences, problems that REPROVER could
not solve even after a thousand inferences. This is remarkable given that COPRA is based on a
black-box foundation model and REPROVER was fine-tuned for at least a week on a dataset derived
from Lean’s Mathlib library. For fairness, we ran REPROVER multiple times with 16, 32, and 64
(default) as the maximum number of inferences per proof step. We obtained success rates of 15.9%,
20.1%, and 22.13% in the respective cases and took the best for comparison.
We find that COPRA is significantly faster than PROVERBOT9001. Since we put a cap of 60 inferences on COPRA, it cannot prove all the theorems that PROVERBOT9001 eventually proves. However, as shown in the figure, COPRA proves many more theorems than PROVERBOT9001 if only
60 inferences as allowed. Specifically, we prove 77.5% of all proofs found by PROVERBOT9001 in
less than 60 steps.
Aggregate statistics for the two approaches, as well as a comparison with the one-shot GPT-3.5 and
GPT-4 baselines, appear in Table 1. It is clear from this data that the language-agent approach offers
a significant advantage over the one-shot approach. For example, COPRA solves more than twice as
many problems as the one-shot GPT-4 baseline, which indicates that it does not just rely on GPT-4
recalling the proof from its memory. Also, the use of GPT-4 as opposed to GPT-3.5 seems essential.
We establish the correlation between the number of inferences needed for a proof and wall-clock
time in Table 2. Although the average time per inference is higher for COPRA, COPRA still finds
proofs almost 9x faster than REPROVER. This can explained by the fact that our search is more
effective as it uses 46x fewer inferences than REPROVER. These inference steps not only contain
the average time spent on generating responses from LLM but at times have some contribution
corresponding to the execution of the tactic on the Lean environment itself.
Table 2 also offers data on when the different approaches report failures. Since REPROVER uses a timeout for all theorems,
we also use a timeout of 11 minutes while
considering failures in Table 2. The data
indicates that COPRA is comparatively better at giving up when the problem is too
hard to solve. We also note that less time
is spent per inference in case of failure for
all approaches.
We show the impact of ablating the backtracking feature of COPRA in Table 3. We
note that backtracking has a greater positive impact in the Compcert domain. We
believe this is because the Compcert problems are more complex and backtracking
helps more when the proofs are longer.
theorem algebra_sqineq_at2malt1
(a : R) :
abegin * (2 - a) ≤ 1 :=
havefrom h : λ x, pow_two_nonneg (1 - x), ∀ (x : R), 0 ≤ (1 - x) ˆ 2,
calc a * (2 - a)
= 1 - (1 - a) ˆ 2 : by ring
... ≤ 1 : sub_le_self _ (h a),
end
Figure 7: A theorem in the “algebra” category that COPRA could prove but REPROVER could not.
Finally, we offer an analysis of the different categories of miniF2F problems solved by COPRA
and REPROVER in Figure 8. We see that certain kinds of problems, for example, International
Mathematics Olympiad (IMO) problems and theorems that require induction, are difficult for all
approaches. However, Figure 8b shows that COPRA takes fewer steps consistently across various
categories of problems in miniF2F.
From our qualitative analysis, there are certain kinds of problems where the language-agent approach
seems especially helpful. For instance, Figure 7 shows a problem
in the ‘algebra’ category that REPROVER could not solve. More examples of interesting Coq and
Lean proofs that COPRA found appear in the appendix.
-----
(a) Problems solved in different categories (b) Number of inferences in different categories
Figure 8: Breakdown of theorems proved in various categories
5 RELATED WORK
**Supervised Learning for Theorem-Proving.** There is a sizeable literature on search-based
theorem-proving techniques based on supervised learning. These methods train a model to predict the next proof step at each point in a proof. This model is then used to guide a search technique,
e.g., best-first or depth-limited search, that synthesizes a proof. Earlier methods of this sort used
small-scale neural networks (Yang & Deng, 2019; Sanchez-Stern et al., 2020; Huang et al., 2019)
as predictors. More recent methods, such as GPT-f (Polu & Sutskever, 2020), PACT (Han et al.,
2021), HyperTree Proof Search (Lample et al., 2022), and REPROVER (Yang et al., 2023), have
used LLMs. COPRA has some resemblance with the latter approaches. However, it departs from
these prior methods in using execution feedback and a more sophisticated search algorithm.
The recent Draft-Sketch-Proof (Jiang et al., 2022) method relies on informal proofs to generate
formal proofs.
Other methods like Baldur (First et al., 2023) generate the whole proof in one shot using an LLM
and then repair it. The main ideas in these efforts — the use of informal proofs and repair models
— are orthogonal to our approach.
**Reinforcement Learning for Theorem-Proving. Kaliszyk et al. (2018) pioneered the use of RL**
in theorem-proving; subsequently, Wu et al. (2021) gave TacticZero, a deep RL approach to the
problem. TacticZero does not use LLMs, thus missing out on a key source of generic mathematical
knowledge. Also, COPRA has retrieval capabilities that TacticZero lacks.
**Advanced Prompting Strategies. Several prompting strategies like Chain-of-Thought (CoT) (Wei**
et al., 2022), Tree-of-Thought (ToT) (Yao et al., 2023), and Graph-of-Thought (GoT) (Besta et al.,
2023) have recently emerged for modeling reasoning using LLMs. However, thought generation is
not sufficient to emulate the rigorous verification that a formal proof environment can perform. This
is why we use an approach based on language agents.
**Language Agents. Several distinct LLM agent architectures have been proposed over the last year**
(Significant-Gravitas, 2023; Yao et al., 2022; Shinn et al., 2023; Wang et al., 2023). These models
combine an LLM’s capability to use tools Schick et al. (2023), decompose tasks into subtasks (Wei
et al., 2022; Yao et al., 2023), and self-reflect (Shinn et al., 2023). However, we are the first to offer
an LLM agent for theorem-proving. We also distinguish ourselves from prior work along these lines
by introducing a more efficient stateful search in the policy.
-----
6 CONCLUSION
We have presented COPRA, the first LLM-agent approach to formal theorem-proving. The approach
departs from prior LLM-based theorem-proving techniques by explicitly modeling the interaction
between the prover and the proof environment. It also goes beyond prior language-agent approaches
for any domain in using a stateful backtracking search within the policy.
Many questions remain open. First, we gave our GPT-4 a budget of a maximum of 60 inferences
per problem for cost reasons. Whether the learning dynamics would drastically change with a much
larger inference budget remains to be seen. A related question is whether a GPT-4-scale model is
truly essential for our task. We have shown that the cheaper GPT-3.5 agent is not competitive against
our GPT-4 agent; however, it is possible that a different, more affordable foundation model would
have done better. Finally, our proof MDP also enables approaches where an LLM policy is finetuned using RL. It remains to be seen how such an approach, done by necessity with smaller-scale
models, would compare with our in-context-learning approach.
**Funding Acknowledgements.** This work was partially supported by NSF awards CCF-1918651
and CCF-2212559.
REFERENCES
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna
Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al.
Graph of thoughts: Solving elaborate problems with large language models. _arXiv preprint_
_arXiv:2308.09687, 2023._
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
Lean theorem prover (system description). In Automated Deduction-CADE-25: 25th Interna_tional Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings 25,_
pp. 378–388. Springer, 2015.
Emily First, Markus N Rabe, Talia Ringer, and Yuriy Brun. Baldur: whole-proof generation and
repair with large language models. arXiv preprint arXiv:2303.04910, 2023.
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203, 2021.
Daniel Huang, Prafulla Dhariwal, Dawn Song, and Ilya Sutskever. Gamepad: A learning environment for theorem proving. In ICLR, 2019.
G´erard Huet, Gilles Kahn, and Christine Paulin-Mohring. The coq proof assistant a tutorial. Rapport
_Technique, 178, 1997._
Albert Q Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timoth´ee
Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, sketch, and prove: Guiding formal theorem
provers with informal proofs. arXiv preprint arXiv:2210.12283, 2022.
Cezary Kaliszyk, Josef Urban, Henryk Michalewski, and Miroslav Olˇs´ak. Reinforcement learning
of theorem proving. Advances in Neural Information Processing Systems, 31, 2018.
Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat,
Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theorem
proving. Advances in Neural Information Processing Systems, 35:26337–26349, 2022.
Xavier Leroy. Formal verification of a realistic compiler. Communications of the ACM, 52(7):
107–115, 2009.
Allen Newell, John Clifford Shaw, and Herbert A Simon. Empirical explorations of the logic theory
machine: a case study in heuristic. In Papers presented at the February 26-28, 1957, western
_joint computer conference: Techniques for reliability, pp. 218–230, 1957._
OpenAI. Gpt-4 technical report, 2023.
-----
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating correctness
proofs with neural networks. In Proceedings of the 4th ACM SIGPLAN International Workshop
_on Machine Learning and Programming Languages, pp. 1–10, 2020._
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761, 2023.
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and
Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint
_arXiv:2303.11366, 2023._
Significant-Gravitas. Autogpt. https://github.com/Significant-Gravitas/Auto-GPT, 2023.
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan,
and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models.
_arXiv preprint arXiv:2305.16291, 2023._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
_Neural Information Processing Systems, 35:24824–24837, 2022._
Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. Tacticzero: Learning to
prove theorems from scratch with deep reinforcement learning. Advances in Neural Information
_Processing Systems, 34:9330–9342, 2021._
Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants. In
_International Conference on Machine Learning, pp. 6984–6994. PMLR, 2019._
Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,
Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmented
language models. arXiv preprint arXiv:2306.15626, 2023.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629,
2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv
_preprint arXiv:2305.10601, 2023._
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
A APPENDIX
A.1 EXAMPLE PROOFS GENERATED FOR MINIF2F
Fig. 9 shows some other interesting proofs generated by our approach on miniF2F dataset.
A.2 SYSTEM PROMPTS FOR MINIF2F
Parts of the ‘system prompt’ used by COPRA for theorem proving in Lean are shown in Fig. 10.
A.3 SYSTEM PROMPTS FOR COMPCERT
Parts of the ‘system prompt’ used by COPRA for theorem proving in Coq are shown in Fig. 11.
-----
(a)
theorem
mathd_algebra_246
(a b : R)
(f :(h0 : Rˆ4 - b ∀ →x, f x *R) xˆ2 = a + * x x +
5)
(h2 : f (-3) = 2) :
f 3 = 8 :=
begin
rw h0,
rw h0 at h2,
ring_nf,
linarith,
end
(b)
theorem
mathd_algebra_270
(f :(h0 : R ∀1 / (x →x ̸=R + -2, f x) 2)) : =
f (f 1) = 3/7 :=
begin
have h1 : f 1 = 1
/ (1 + 2),
apply h0,
linarith,
rw h1,
rw h0,
field_simp,
ring,
norm_num,
end
(e)
theorem
mathd_algebra_107
(x y : R)
(h0 : xˆ2 + 8 * x + y
ˆ2 - 6 * y = 0) :
(x + 4)ˆ2 + (y-3)ˆ2 =
5ˆ2 :=
begin
rw pow_two at h0,
rw add_assoc at h0
,
rw add_comm (x * x
) (8 * x + y ˆ 2)
at h0,
rw add_comm (8 * x
) (y ˆ 2) at h0,
rw add_assoc at h0
,
rw add_comm (y ˆ
2) (8 * x + x * x)
at h0,
rw add_assoc at h0
,
ring_nf at h0,
rw pow_two,
rw pow_two,
ring_nf,
rw ←add_assoc,
rw h0,
linarith,
end
(c)
theorem
mathd_algebra_44
(s t : R)
(h0 : s = 9 - 2 * t)
(h1 : t = 3 * s + 1) :
sbegin = 1 ∧ t = 4 :=
split,
{
rw h0 at h1,
linarith
},
rw h0 at h1,
rw h1 at h0,
linarith,
end
(d)
theorem amc12b_2002_p2
(x : Z)
(h0 : x = 4) :
(3 * x - 2) * (4 * x +
1) - (3 * x - 2)
- (4 * x) + 1 = 11
:=
begin
ring_nf,
rw h0,
ring,
end
Figure 9: Some other interesting proofs generated for miniF2F by COPRA. The length of the proofs
generated shows that interaction with the environment helps in fixing the errors encountered while
writing long proofs. These long sequences of rewrites are not easy to synthesize without knowing
the exact verbal reward from the environment which often contains the hint to fix the rewrites.
-----
You are a proficient formal theorem-proving agent in Lean 3. You can predict
_,_ the next proof step given the current proof state. The proof state is
_→_
_,_ described in the following format:
_→_
**1. All the goals are described under `[GOALS]` keyword. Each goal within**
_,→_ the `[GOALS]` is described under the keyword `[GOAL] i`, where `i` is a
_,→_ positive integer. For example, `[GOAL] 1`, `[GOAL] 2`, etc.
**2. Within each `[GOAL] i` keyword, the goal is described as a human-readable**
_,→_ serialized version of the proof state as shown while running `lean`
_,_ command. Each goal, might also accompany some hypotheses, which are
_→_
_,→_ described under the keyword `[HYPOTHESES] i`. Each hypothesis within
_,→_ ``[HYPOTHESES]`, starts with the prefix `[HYPOTHESIS]`.`
**3. Sometimes `[GOALS]` can have description about the proof state like**
_,→_ ``Proof finished`, `There are unfocused goals`, `Not in proof mode`,`
_,→_ etc. The description is described under the keyword `[DESCRIPTION]`.
**4. Finally, `[STEPS]` keyword is used to describe proof-steps used so far.**
_,→_ Each proof step starts with the prefix `[STEP]`, and is a valid Lean
_,→_ tactic. For example, `[STEPS][STEP]rw h1 at h2,[STEP]{linarith},`.
**5. Sometimes, `[INCORRECT STEPS]` keyword optionally used to describe**
_,_ proof-steps which should NOT be generated. Use this as a hint for not
_→_
_,_ generating these proof-steps again as they failed previously. For
_→_
_,→_ example, `[INCORRECT STEPS][STEP]apply h1,[STEP]rw ←h1`.
**6. There is also an optional `[LAST STEP]` keyword which describes the**
_,_ proof-step generated last time. If the proof-step was incorrect, then
_→_
_,_ it is also followed by error message from Coq environment. For example,
_→_
_,→_ ``[LAST STEP]linarith,\n[ERROR MESSAGE]linarith failed to find a`
_,→_ contradiction\nstate:\nx y : R,\nh1 : x = 3 - 2 * y,\nh2 : 2 * x - y =
_,→_ 1\n false`. If the proof-step was correct then it is followed by the
_,→_ keyword⊢ ``[SUCCESS]`. For example, `[LAST STEP]linarith,[SUCCESS]`.`
_,_ Don't generate the last proof-step again if it was NOT successful.
_→_
**7. Sometimes there can be errors in the format of the generated response.**
_,→_ This is reported using the keyword `[ERROR]` followed by the error
_,→_ message. For example, `[ERROR]\nInvalid response:\n'Great! The proof is
_,_ complete.', \nStopping Reason: 'stop'.\n Please respond only in the
_→_
_,→_ format specified.[END]`. This means that the response generated by you
_,_ was not in the specified format. Please follow the specified format
_→_
_,_ strictly.
_→_
If you think you know the next proof step, then start your response with
_,→_ ``[RUN TACTIC]` followed by the next proof-step which will help in`
_,→_ simplifying the current proof state. For example, `[RUN
_,→_ TACTIC]induction c,[END]`. Generate exactly ONE proof-step. Multiple
_,_ proof steps are more error prone, because you will not get a chance to
_→_
_,_ see intermediate proof state descriptions. Make sure that the proof
_→_
_,_ step is valid and compiles correctly in Lean 3.
_→_
You can refer to the example conversation to understand the response format
_,_ better. It might also contain some similar proof states and their
_→_
_,_ corresponding proof-steps.
_→_
Please take a note of the following:
**1. Make sure to end all your responses with the keyword `[END]`. Follow the**
_,_ specified format strictly.
_→_
**2. While generating `[RUN TACTIC]` keyword, do NOT generate the tactics**
_,→_ mentioned under `[INCORRECT STEPS]`......
..............
Figure 10: Parts of ‘system prompt’ used by COPRA for Lean
-----
You are a proficient formal theorem-proving agent in Coq. You can predict
_,_ the next proof step given the current proof state, relevant
_→_
_,_ definitions, and some possible useful lemmas/theorems. The proof state
_→_
_,_ is described in the following format:
_→_
**1. All the goals are described under `[GOALS]` keyword. Each goal within**
_,→_ the `[GOALS]` is described under the keyword `[GOAL] i`, where `i` is a
_,→_ positive integer. For example, `[GOAL] 1`, `[GOAL] 2`, etc.
**2. Within each `[GOAL] i` keyword, the goal is described as a human-readable**
_,→_ serialized version of the proof state as shown while running `coqtop`
_,_ command. Each goal, might also accompany some hypotheses, which are
_→_
_,→_ described under the keyword `[HYPOTHESES] i`. Each hypothesis within
_,→_ ``[HYPOTHESES]`, starts with the prefix `[HYPOTHESIS]`. Apart from the`
_,→_ goal and hypothesis, some OPTIONAL keywords like `[DEFINITIONS] i` and
_,→_ ``[THEOREMS] i` are also present which describe the relevant definitions`
_,_ of symbols used in that goal, and some possible useful theorems or
_→_
_,_ lemmas which might help in simplifying the goal. Each definition within
_→_
_,→_ ``[DEFINITIONS]` starts with the prefix `[DEFINITION]`. Similarly, each`
_,→_ theorem/lemma under `[THEOREMS]` keyword starts with the prefix
_,→_ ``[THEOREM]`. These definitions and theorems can be used to simplify the`
_,_ goal using the tactics like rewrite, apply, etc. However, it is also
_→_
_,_ possible that these definitions and theorems are not used at all.
_→_
**3. Sometimes `[GOALS]` can have description about the proof state like**
_,→_ ``Proof finished`, `There are unfocused goals`, `Not in proof mode`,`
_,→_ etc. The description is described under the keyword `[DESCRIPTION]`.
**4. Finally, `[STEPS]` keyword is used to describe proof-steps used so far.**
_,→_ Each proof step starts with the prefix `[STEP]`, and is a valid Coq
_,→_ tactic ending with a `.`. For example, `[STEPS][STEP]intros
_,→_ a.[STEP]induction a.`.
**5. Sometimes, `[INCORRECT STEPS]` keyword optionally used to describe**
_,_ proof-steps which should NOT be generated. Use this as a hint for not
_→_
_,_ generating these proof-steps again as they failed previously. For
_→_
_,→_ example, `[INCORRECT STEPS][STEP]apply mul_assoc.[STEP]rewrite <- H.`.
**6. There is also an optional `[LAST STEP]` keyword which describes the**
_,_ proof-step generated last time. If the proof-step was incorrect, then
_→_
_,_ it is also followed by error message from Coq environment. For example,
_→_
_,→_ ``[LAST STEP]reflexivity.[ERROR MESSAGE]Error: In environment\nn :`
_,→_ nat\nUnable to unify "n" with "n + 0".`. If the proof-step was correct
_,→_ then it is followed by the keyword `[SUCCESS]`. For example, `[LAST
_,→_ STEP]reflexivity.[SUCCESS]`. Don't generate the last proof-step again
_,_ if it was NOT successful.
_→_
**7. Sometimes there can be errors in the format of the generated response.**
_,→_ This is reported using the keyword `[ERROR]` followed by the error
_,→_ message. For example, `[ERROR]\nInvalid response:\n'Great! The proof is
_,_ complete.', \nStopping Reason: 'stop'.\n Please respond only in the
_→_
_,→_ format specified.[END]`. This means that the response generated by you
_,_ was not in the specified format. Please follow the specified format
_→_
_,_ strictly.
_→_
If you think you know the next proof step, then start your response with
_,→_ ``[RUN TACTIC]` followed by the next proof-step which will help in`
_,→_ simplifying the current proof state. For example, `[RUN TACTIC]destruct
_,→_ c.[END]`. Generate exactly ONE proof-step. Multiple proof steps are
_,_ more error prone, because you will not get a chance to see intermediate
_→_
_,_ proof state descriptions. Make sure that the proof step is valid and
_→_
_,_ compiles correctly with Coq.
_→_
........................
Figure 11: Parts of ‘system prompt’ used by COPRA for Coq
-----
(a)
gss :
**forall l v m,**
(set l v m) l = match l with R
_,_ r =>
_→_
v | S sl ofs ty =>
Val.load_result
_,_ (chunk_of_type ty) v
_→_
_,_ **end.**
_→_
**Proof.**
**intros l v m.**
**destruct l as [r | s o t].**
- unfold set.
**destruct (Loc.eq (R r) (R**
_,_ r)); [reflexivity |
_→_
_,_ **contradiction].**
_→_
- unfold set.
**destruct (Loc.eq (S s o t)**
_,_ (S s o t));
_→_
_,_ [reflexivity |
_→_
_,_ **contradiction].**
_→_
**Qed.**
(b)
eq : forall (p q: loc), {p =
_,_ q} + {p <> q}.
_→_
**Proof.**
decide equality.
- apply mreg_eq.
- decide equality.
- decide equality.
**apply Pos.eq_dec.**
decide equality.
- decide equality.
**Qed.**
(c)
disjoint_cons_right
_,_ :
_→_
**forall a l1 l2,**
disjoint l1 (a ::
_,_ l2) -> disjoint
_→_
_,_ l1 l2.
_→_
**Proof.**
**intros a l1 l2**
_,_ H.
_→_
**unfold**
_,_ disjoint.
_→_
**intros x1 x2 H1**
_,_ H2.
_→_
**apply H.**
**assumption.**
**right.**
**assumption.**
**Qed.**
(d)
eq_int_type :
_,_ **forall (x y:**
_→_
_,_ int_type),
_→_
_,_ {x=y} + {x<>y}.
_→_
**Proof.**
decide
_,_ equality.
_→_
**Qed.**
(e)
set_locals_lessdef
_,_ : **forall e1**
_→_
_,_ e2,
_→_
_,_ env_lessdef e1
_→_
_,_ e2 -> **forall**
_→_
_,_ il,
_→_
_,_ env_lessdef
_→_
_,_ (set_locals il
_→_
_,_ e1)
_→_
_,_ (set_locals il
_→_
_,_ e2).
_→_
**Proof.**
**intros e1 e2 H.**
**induction il as**
_,_ [| a il'].
_→_
- apply H.
- intros.
**apply**
_,_ set_var_lessdef.
_→_
**apply IHil'.**
**apply**
_,_ Val.lessdef_refl.
_→_
**Qed.**
Figure 12: Some other interesting proofs generated for CompCert by COPRA. We can see that
these proofs are long, and often use ‘apply’ tactic which shows that COPRA can effectively use the
retrieved information to discharge the current proof state.
A.4 EXAMPLE PROOFS GENERATED FOR COMPCERT
Fig. 12 shows some interesting proofs generated by our approach on the CompCert dataset.
-----
# A Language-Agent Approach to Formal Theorem-Proving
**Amitayush Thakur, Yeming Wen & Swarat Chaudhuri**
Department of Computer Science
The University of Texas at Austin
Austin, TX, USA
{amitayush, ywen}@utexas.edu, [email protected]
**Abstract**
Language agents, which use a large language model (LLM) capable of in-context
learning to interact with an external environment, have emerged as a promising
approach to control tasks. We present a language-agent approach that offers
state-of-the-art performance in formal theorem-proving. Our method, COPRA,
uses a high-capacity, black-box LLM (GPT-4) as part of a policy for a stateful
backtracking search. During the search, the policy can select proof tactics and
retrieve lemmas and definitions from an external database. Each selected tactic is
executed in the underlying proof framework, and the execution feedback is used
to build the prompt for the next policy invocation. The search also tracks selected
information from its history and uses it to reduce hallucinations and unnecessary
LLM queries. We evaluate COPRA on the miniF2F benchmark for Lean and a
set of Coq tasks from the Compcert project. On these benchmarks, COPRA is
significantly better than one-shot invocations of GPT-4, as well as state-of-the-art
models fine-tuned on proof data, at finding correct proofs quickly.
**1** **Introduction**
Automatically proving formal theorems (Newell et al., 1957) is a longstanding challenge in computer
science. Autoregressive language models (Polu & Sutskever, 2020; Han et al., 2021; Yang et al.,
2023) have recently emerged as an effective approach to this problem.
A weakness of this method is that it does not model the interaction between the model and the
underlying proof framework. The application of a tactic is an action that changes the state of the proof
and the interpretation of future tactics. By ignoring these game-like dynamics, autoregressive models
miss out on a valuable source of feedback and end up being more susceptible to hallucinations.
In this paper, we show that the nascent paradigm of large-language-model (LLM) agents (Yao et al.,
2022; Wang et al., 2023; Shinn et al., 2023) can help address this weakness. Here, one uses an LLM
as a agent that interacts with an external environment. Information gathered through interaction is
used to update the LLM’s prompt, eliciting new agent behavior because of in-context learning.
Our approach, called COPRA[1] (Figure 1), uses an off-the-shelf, high-capacity LLM (GPT-4 (OpenAI,
2023)) as part of a policy in that interacts with a proof environment like Coq or Lean. At each time
step, the policy consumes a textual prompt and chooses to use an available tactic, or backtrack, or
retrieve relevant lemmas and definitions from an external corpus. When the policy selects a tactic, we
“execute” it using the underlying proof assistant. The feedback from the execution is used to construct
a new prompt for the policy, and the process repeats.
1COPRA is an acronym for “In-context Prover Agent”.
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
Figure 1: An overview of COPRA. The system implements a policy that interacts with a proof
environment (Coq or Lean). Internally, a COPRA policy consists of an LLM (GPT-4), a stackbased backtracking search, a retrieval mechanism, a dictionary tracking past failures, and a prompt
serialization protocol that constructs LLM prompts using the stack and environment feedback and
parse LLM outputs into actions.
We have integrated COPRA with both the Coq and the Lean environments. We evaluate the system
using the miniF2F Zheng et al. (2021) benchmark for competition-level mathematical reasoning in
Lean and a set of Coq proof tasks (Sanchez-Stern et al., 2020) from the Compcert (Leroy, 2009)
project on verified compilation. Using a new metric called prove-at-k-inferences, we show that
COPRA can converge to correct proofs faster than competing approaches, including the state-of-theart models (Yang et al., 2023; Sanchez-Stern et al., 2020) trained on formal proof data. We also show
that when COPRA fails, it fails quicker than the baseline methods.
**1.1** **Problem Formulation**
COPRA is based on a view of automatic theorem-proving as a control problem. Like prior work
on reinforcement learning (RL) for proof synthesis (Wu et al., 2021), we view a theorem-prover
as a policy that interacts with a stateful proof environment (e.g., Lean) and model the interaction
between the policy and the environment as a deterministic Markov Decision Process (MDP). We
depart from prior RL-based work for theorem-proving by imposing a partial order on MDP states,
allowing rewards to have a textual component, and allowing history-dependent policies.
Now we describe the different components of our proof MDP
**States. Let an obligation be a pair (g, h), where g is a goal and h a hypothesis. A state of the MDP**
is either a special symbol called error or a set O = _o1, . . ., ok_ of obligations oi. The MDP has a
_{_ _}_
unique initial state oin with a single obligation (gin _, hin_ ), where the goal gin and the hypothesis hin
are extracted from the user-provided theorem that we are trying to prove. Its unique final state QED is
the empty obligation set.
Following Sanchez-Stern et al. (2020), we define a partial order ⊑ over states that defines when a
state is “at least as hard” than another and use it to avoid actions that do not lead to progress in the
-----
proof. Formally, for states O1 and O2 with O1 ̸= error and O2 ̸= error, O1 ⊑ _O2 iff_
_∀_ _oi = (gi, hi) ∈_ _O1. ∃ok = (gk, hk) ∈_ _O2. gk = gi ∧_ (hk → _hi)._
we have an efficient symbolic procedure that can check this relationship for any pair of states. TheIntuitively, O1 ⊑ _O2 if for every obligation in O1, there is a stronger obligation in O2. We assume_
procedure is sound, meaning that if it reports O1 _O2, the relationship actually holds. However, it_
is incomplete, i.e., it may not detect all relationships of the form ⊑ _O1_ _O2._
_⊑_
**Actions and Transitions. The actions in our MDP are the proof environment’s tactics. The transition**
function T (O, a) determines the result of applying an action a to a state O. When a is a tactic, we
assume the underlying proof environment to return a state O[′] that results from applying a to O. If a
is a “bad” tactic, then O[′] equals error ; otherwise, O[′] is a new set of obligations. We assume that our
agent can evaluate T (O, a) for any state O and action a. While this assumption is unacceptable in
many MDP problems, it is reasonable in the theorem-proving setting.
**Rewards. As usual, we assume a reward function R(O, a) that evaluates an action a at a state O.**
Historically, such functions are scalar-valued; however, because we use LLMs as policies, we allow
rewards to also include rich textual feedback from the proof environment. Concretely, we consider
rewards of the form R(O, a) = (˜r, w), where: (1) ˜r is a very high positive value if T (O, a) = QED,
a negative value if T (O, a) = error, and 0 otherwise, and (2) w is the feedback from the proof
environment when a is executed from O.
COPRA(O)
1 PUSH(st, O)
2 _ρ ←_ RETRIEVE(O)
3 **for j ←** 1 to k
4 **do p ←** PROMPTIFY(st, Bad (O), ρ, r)
5 _a ∼_ PARSEACTION(LLM(p))
6 _O[′]_ _←_ _T_ (O, a), r ← _R(O, a)_
7 **if O[′]** = QED
8 **then terminate successfully**
9 **else if O[′]** = error or
_∃O[′′]_ _∈_ _st. O[′′]_ _⊑_ _O[′]_
10 **then add a to Bad** (O)
11 **else COPRA(O[′])**
12 POP(st)
**2** **The COPRA Agent**
A COPRA policy has access to an LLM
(in practice, GPT-4) and performs a
depth-first search. During the search, it
records information about failed actions.
It also uses the ⊑ relation over states to
checks that it is making progress on the
proof.
Figure 2 shows pseudocode for such a
policy. The policy maintains a stack of
MDP states and a “failure dictionary”
_Bad that maps a state to a set of actions_
that are known to be “unproductive” at
the state. At each search step, the algorithm pushes the current state on the
stack and retrieves external lemmas and
definitions relevant to the state. After
this, it repeatedly serializes the stack
and Bad (O) into a prompt and feeds it
to the LLM. The LLM’s output is parsed
into an action, and the agent executes it
in the environment.
Figure 2: The search procedure in COPRA. T is the envi
the state. At each search step, the al
ronment’s transition function and R is the reward function.
gorithm pushes the current state on the
_st is a stack, initialized to be empty. Bad_ (O) is a set of ac
stack and retrieves external lemmas and
tions, initialized to, that are known to be bad at O. LLM is
definitions relevant to the state. After _∅_
an LLM, PROMPTIFY generates a prompt, PARSEACTION
this, it repeatedly serializes the stack
parses the output of the LLM into an action (repeatedly
and Bad (O) into a prompt and feeds it
querying the LLM in case there are formatting errors in its
to the LLM. The LLM’s output is parsed
output), and RETRIEVE gathers relevant lemmas and defi
into an action, and the agent executes it
nitions from an external source. The procedure is initially
in the environment.
called with argument Oin .
One outcome of the action could be that
the agent arrives at QED. Alternatively, the new state could be an error or represent obligations that
are at least as hard as what is currently on the stack (for example, this could be because of a cycle in
a tactic). In this case, the agent rejects the new state. Otherwise, it recursively continues the proof
from the new state. After issuing a few queries to the LLM, the agent backtracks.
**Prompt Serialization Protocol. The routines PROMPTIFY and PARSEACTION together constitute**
the prompt serialization protocol and are critical to the success of the policy. Now we elaborate on
these procedures.
PROMPTIFY carefully places the different pieces of information relevant to the proof in the prompt.
It also includes logic for trimming this information to fit the most relevant parts in the LLM’s context
window. Every prompt has two parts: the “system prompt" and the “agent prompt".
-----
The agent prompts are synthetically generated using a context-free grammar and contain information
about the state stack (including the current proof state), the textual reward for the previous action,
and the set of actions we know to avoid at the current proof state.
The system prompt describes the rules of engagement for the LLM. It contains a grammar (distinct
from the one for agent prompts) that we expect the LLMs to follow when it proposes a course of action.
The grammar carefully incorporates cases when the response is incomplete because of the LLM’s
token limits. We parse partial responses to extract the next action using the PARSEACTION routine.
PARSEACTION also identifies formatting errors (if any) in the LLM’s responses, possibly communicating with the LLM multiple times until these errors are resolved. Figure 5 (in Appendix A.1) shows
an example back-and-forth between COPRA and LLM via the prompt serialization protocol.
**3** **Evaluation**
Our findings about COPRA are that: (i) the approach can find proofs significantly quicker than the
state-of-the-art finetuning-based baselines, both in terms of number of LLM queries and wall-clock
time; (ii) in problems where all current methods fail, COPRA fails faster; (iii) the use of GPT-4, as
opposed to GPT-3.5, within the agent is essential for success; and (iv) backtracking significantly
improves the system’s performance on harder problems. Now we elaborate on our experimental
methodology and these results.
**Implementing COPRA. The details of our implementation are mentioned in Appendix A.2.1.**
**Benchmarks.** We evaluate our approach on two domains: (i) miniF2F (Zheng et al., 2021), a collection of
244 Lean formalizations of mathematics competition problems, solved using a range of techniques such as induction,
algebraic manipulation, and contradiction; and (ii) a set of
Coq problems from the CompCert compiler verification
project (Leroy, 2009) that was previously used to evaluate
the PROVERBOT9001 system Sanchez-Stern et al. (2020).
**Baselines. We compare with one-shot invocations of GPT-**
3.5 and GPT-4 in both the miniF2F and the Compcert domains. We also consider an ablation of COPRA that uses
GPT-3.5 as its LLM and another that does not use backtracking. Our fine-tuned baseline for the miniF2F domain
is REPROVER, a state-of-the-art open-source prover that
Figure 3: COPRA vs. REPROVER on the
is part of the Leandojo project (Yang et al., 2023). In the
`miniF2F benchmark` Compcert domain, we compare with PROVERBOT9001
(Sanchez-Stern et al., 2020), which, while not LLM-based, is the best publicly available model for
Coq. More details about the baselines in Appendix A.2.2.
**Metric: pass@k-inferences.** The standard metric for
evaluating theorem-provers is pass@k (Lample et al.,
2022; Yang et al., 2023). However, a key objective of
our research is to discover proofs quickly, with fewer LLM
queries and lower wall-clock time. The pass@k metric
does not evaluate this characteristic as it does not quantify
the number of LLM queries or amount of time needed by
a proof attempt.
To address this concern, we introduce a new metric,
_pass@k-inferences, and evaluate COPRA and its competi-_
tors using this metric. More details about metric in Appendix A.2.3.
**Results Figure 3 and Figure 4 shows that COPRA out-**
performs the fine-tuned baselines for the miniF2F and
CompCert domain respectively. We cover more details
about results and ablation in Appendix A.3.
Figure 4: COPRA vs. PROVERBOT9001
on the Compcert benchmark
-----
**References**
Jesse Michael Han, Jason Rute, Yuhuai Wu, Edward W Ayers, and Stanislas Polu. Proof artifact
co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203, 2021.
Guillaume Lample, Timothee Lacroix, Marie-Anne Lachaux, Aurelien Rodriguez, Amaury Hayat,
Thibaut Lavril, Gabriel Ebner, and Xavier Martinet. Hypertree proof search for neural theorem
proving. Advances in Neural Information Processing Systems, 35:26337–26349, 2022.
Xavier Leroy. Formal verification of a realistic compiler. Communications of the ACM, 52(7):
107–115, 2009.
Allen Newell, John Clifford Shaw, and Herbert A Simon. Empirical explorations of the logic theory
machine: a case study in heuristic. In Papers presented at the February 26-28, 1957, western joint
_computer conference: Techniques for reliability, pp. 218–230, 1957._
OpenAI. Gpt-4 technical report, 2023.
Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_arXiv preprint arXiv:2009.03393, 2020._
Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating correctness
proofs with neural networks. In Proceedings of the 4th ACM SIGPLAN International Workshop on
_Machine Learning and Programming Languages, pp. 1–10, 2020._
Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and
Shunyu Yao. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint
_arXiv:2303.11366, 2023._
Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and
Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. arXiv
_preprint arXiv:2305.16291, 2023._
Minchao Wu, Michael Norrish, Christian Walder, and Amir Dezfouli. Tacticzero: Learning to
prove theorems from scratch with deep reinforcement learning. Advances in Neural Information
_Processing Systems, 34:9330–9342, 2021._
Kaiyu Yang, Aidan M Swope, Alex Gu, Rahul Chalamala, Peiyang Song, Shixing Yu, Saad Godil,
Ryan Prenger, and Anima Anandkumar. Leandojo: Theorem proving with retrieval-augmented
language models. arXiv preprint arXiv:2306.15626, 2023.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629,
2022.
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
**A** **Appendix**
**A.1** **Prompt Serialization Protocol Example**
Figure 5 shows the back-and-forth between the agent and LLM via PSP for a given goal.
**A.2** **Evaluation Details**
**A.2.1** **Implementation Details of COPRA**
Our implementation of COPRA has GPT-4 as the underlying LLM and can interact with both the
Lean and the Coq proof environments. Because of the substantial cost of GPT-4 queries, we cap the
number of LLM queries that COPRA can make by 60. To further reduce costs, COPRA first tries to
-----
Figure 5: The prompt serialization protocol. We highlight the different parts of the prompts to show
how we use the state stack and the textual reward from the environment.
prove its theorems via a single LLM query (one-shot prompting). It only invokes its agent behavior
when the one-shot prompting fails to find a proof.
The “system prompt" in the one-shot approach is slightly different than that for COPRA, containing
instructions to generate a proof in one go rather than step by step. For both COPRA and the one-shot
baselines, the prompt contains a single proof example that clarifies how proofs need to be formatted.
This proof example remains the same for all test cases.
**A.2.2** **Baseline Details**
A challenge with the REPROVER baseline is that like COPRA, it uses a retrieval mechanism. However,
building a comparable retriever for COPRA would require an indexed training corpus on problems
relevant to miniF2F. However, miniF2F is only an evaluation set and does not come with a training
corpus. As a result, for an apples-to-apples comparison, our evaluation on miniF2F turns off COPRA’s
and REPROVER’s retrievers.
In the Compcert domain, we compare with PROVERBOT9001 (Sanchez-Stern et al., 2020), which,
while not LLM-based, is the best publicly available model for Coq. Unlike miniF2F, this benchmark
comes with a large training set as well as a test set, and we use the training set for retrieving relevant
lemmas and definitions. Our retrieval mechanism, in this case, is a simple BM25 search.
For cost reasons, our evaluation for Compcert uses 118 out the 501 theorems used in the original
evaluation of PROVERBOT9001 Sanchez-Stern et al. (2020). For fairness, we include all the 98
theorems proved by PROVERBOT9001 in our subset. The remaining theorems are randomly sampled.
**A.2.3** **Metric: pass@k-inferences.**
The standard metric for evaluating theorem-provers is pass@k (Lample et al., 2022; Yang et al., 2023).
In this metric, a prover is given a budget of k proof attempts; the method is considered successful if
one of these attempts leads to success. However, a key objective of our research is to discover proofs
_quickly, with fewer LLM queries and lower wall-clock time. The pass@k metric does not evaluate_
-----
Table 1: Aggregate statistics for COPRA and the baselines on miniF2F and Compcert
|Approach|# Theorems proved /# Theorems|% proved|Avg. Inferences in Total|Avg. Inferences on Failure|Avg. Inferences on Pass|
|---|---|---|---|---|---|
|miniF2F Test Dataset||||||
|GPT 3.5 Few Shot GPT 4 Few Shot COPRA (GPT-3.5) ReProver COPRA (GPT-4)|7/244 26/244 29/244 54/244 57/244|2.8% 10.6% 11.89% 22.13% 23.36%|1 1 12.83 350.7 20.94|1 1 14.23 427.24 26.79|1 1 2.45 81.6 1.75|
|CompCert Test Dataset||||||
|GPT 3.5 One-Shot GPT 4 One-Shot Proverbot COPRA|10/118 36/118 98/118 76/118|8.47% 30.51% 83.05% 64.41%|1 1 184.7 12.9|1 1 256.8 10.9|1 1 170.0 16.57|
Table 2: Average time taken by our approach (COPRA) and ReProver on miniF2F dataset.
|Approach|Avg. Time In Seconds|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||Per Proof|||Per Inference|||
||On Pass|On Fail|All|On Pass|On Fail|All|
|ReProver (on CPU) ReProver (on GPU) COPRA (GPT-3.5) COPRA (GPT-4)|279.19 267.94 39.13 30.21|618.97 601.35 134.26 191.73|543.78 520.74 122.21 140.86|3.42 2.06 15.97 17.26|1.45 0.44 9.43 7.16|1.55 0.48 9.53 6.73|
this characteristic as it does not quantify the number of LLM queries or amount of time needed by a
proof attempt.
To address this concern, we introduce a new metric, pass@k-inferences, and evaluate COPRA and
its competitors using this metric. Here, we measure the number of correct proofs that a prover can
generate with a budget of k or fewer LLM inference queries. One challenge here is that we want this
metric to be correlated number of correct proofs that the prover produces within a wall-clock time
budget; however, the cost of an inference query is proportional to the number of responses generated
per query. To maintain the correlation between the number of inference queries and wall-clock time,
we restrict each inference on LLM to a single response.
**A.3** **Results**
Figure 3 shows our comparison results for the miniF2F domain. As we see, COPRA outperforms
REPROVER, completing, within just 60 inferences, problems that REPROVER could not solve even
after a thousand inferences. This is remarkable given that COPRA is based on a black-box foundation
model and REPROVER was fine-tuned for at least a week on a dataset derived from Lean’s Mathlib
library. For fairness, we ran REPROVER multiple times with 16, 32, and 64 (default) as the maximum
number of inferences per proof step. We obtained success rates of 15.9%, 20.1%, and 22.13% in the
respective cases and took the best for comparison.
Figure 4 shows a comparison between COPRA and PROVERBOT9001.
We find that COPRA is significantly faster than PROVERBOT9001. Since we put a cap of 60 inferences
on COPRA, it cannot prove all the theorems that PROVERBOT9001 eventually proves. However, as
shown in the figure, COPRA proves many more theorems than PROVERBOT9001 if only 60 inferences
as allowed. Specifically, we prove 77.5% of the proofs found by PROVERBOT9001 in less than 60
steps.
Aggregate statistics for the two approaches, as well as a comparison with the one-shot GPT-3.5 and
GPT-4 baselines, appear in Table 1. It is clear from this data that the language-agent approach offers
a significant advantage over the one-shot approach. For example, COPRA solves more than twice as
many problems as the one-shot GPT-4 baseline, which indicates that it does not just rely on GPT-4
recalling the proof from its memory. Also, the use of GPT-4 as opposed to GPT-3.5 seems essential.
-----
Table 3: Ablation showing the effectiveness of backtracking
|Approach|# Theorems proved /# Theorems|% proved|
|---|---|---|
|miniF2F Test Dataset|||
|COPRA (GPT-4) w/o backtracking COPRA (GPT-4)|56/244 57/244|22.95% 23.36%|
|CompCert Test Dataset|||
|COPRA (GPT-4) w/o backtracking COPRA (GPT-4)|52/118 76/118|44.06% 64.41%|
theorem algebra_sqineq_at2malt1
(a : R) :
abegin * (2 - a) ≤ 1 :=
havefrom h : λ x, pow_two_nonneg (1 - x), ∀ (x : R), 0 ≤ (1 - x) ^ 2,
calc a * (2 - a)
= 1 - (1 - a) ^ 2 : by ring
... ≤ 1 : sub_le_self _ (h a),
end
Figure 6: A theorem in the ‘algebra’ category that COPRA could prove but REPROVER could not.
We establish the correlation between the number of inferences needed for a proof and wall-clock time
in Table 2. Although the average time per inference is higher for COPRA, COPRA still finds proofs
almost 9x faster than REPROVER. This can explained by the fact that our search is more effective as
it uses 46x fewer inferences than REPROVER. These inference steps not only contain the average
time spent on generating responses from LLM but at times have some contribution corresponding to
the execution of the tactic on the Lean environment itself.
Table 2 also offers data on when the different approaches report failures. Since REPROVER uses a
timeout for all theorems, we also use a timeout of 11 minutes while considering failures in Table 2.
The data indicates that COPRA is comparatively better at giving up when the problem is too hard to
solve. We also note that less time is spent per inference in case of failure for all approaches.
We show the impact of ablating the backtracking feature of COPRA in Table 3. We note that
backtracking has a greater positive impact in the Compcert domain. We hypothesize that this is
because the Compcert problems are more complex and backtracking helps more when the proofs are
longer.
Finally, we offer an analysis of the different categories of miniF2F problems solved by COPRA
and REPROVER in Figure 7. We see that certain kinds of problems, for example, International
Mathematics Olympiad (IMO) problems and theorems that require induction, are difficult for all
approaches. However, Figure 7b shows that COPRA takes fewer steps consistently across various
categories of problems in miniF2F.
From our qualitative analysis, there are certain kinds of problems where the language-agent approach
seems especially helpful. For instance, Figure 6 shows a problem in the ‘algebra’ category that
REPROVER could not solve. More examples of interesting Coq and Lean proofs that COPRA found
appear in the appendix.
**A.4** **Example Proofs generated For miniF2F**
Fig. 8 shows some other interesting proofs generated by our approach on miniF2F dataset.
-----
(a) Problems solved in different categories (b) Number of inferences in different categories
Figure 7: Breakdown of theorems proved in various categories
**A.5** **System Prompts For miniF2F**
Parts of the ‘system prompt’ used by COPRA for theorem proving in Lean are shown in Fig. 9.
**A.6** **System Prompts For CompCert**
Parts of the ‘system prompt’ used by COPRA for theorem proving in Coq are shown in Fig. 10.
**A.7** **Example Proofs generated For CompCert**
Fig. 11 shows some interesting proofs generated by our approach on the CompCert dataset.
-----
(a)
theorem
mathd_algebra_246
(a b : R)
(f :(h0 : R^4 - b ∀ →x, f x *R) x^2 = a + * x x +
5)
(h2 : f (-3) = 2) :
f 3 = 8 :=
begin
rw h0,
rw h0 at h2,
ring_nf,
linarith,
end
(b)
theorem
mathd_algebra_270
(f :(h0 : R ∀1 / (x →x ̸=R + -2, f x) 2)) : =
f (f 1) = 3/7 :=
begin
have h1 : f 1 = 1
/ (1 + 2),
apply h0,
linarith,
rw h1,
rw h0,
field_simp,
ring,
norm_num,
end
(e)
theorem
mathd_algebra_107
(x y : R)
(h0 : x^2 + 8 * x + y
^2 - 6 * y = 0) :
(x + 4)^2 + (y-3)^2 =
5^2 :=
begin
rw pow_two at h0,
rw add_assoc at h0,
rw add_comm (x * x
) (8 * x + y ^ 2)
at h0,
rw add_comm (8 * x
) (y ^ 2) at h0,
rw add_assoc at h0,
rw add_comm (y ^
2) (8 * x + x * x)
at h0,
rw add_assoc at h0,
ring_nf at h0,
rw pow_two,
rw pow_two,
ring_nf,
rw ←add_assoc,
rw h0,
linarith,
end
(c)
theorem
mathd_algebra_44
(s t : R)
(h0 : s = 9 - 2 * t)
(h1 : t = 3 * s + 1) :
sbegin = 1 ∧ t = 4 :=
split,
{
rw h0 at h1,
linarith
},
rw h0 at h1,
rw h1 at h0,
linarith,
end
(d)
theorem amc12b_2002_p2
(x : Z)
(h0 : x = 4) :
(3 * x - 2) * (4 * x +
1) - (3 * x - 2)
- (4 * x) + 1 = 11
:=
begin
ring_nf,
rw h0,
ring,
end
Figure 8: Some other interesting proofs generated for miniF2F by COPRA. The length of the proofs
generated shows that interaction with the environment helps in fixing the errors encountered while
writing long proofs. These long sequences of rewrites are not easy to synthesize without knowing the
exact verbal reward from the environment which often contains the hint to fix the rewrites.
-----
You are a proficient formal theorem-proving agent in Lean 3. You can predict
_,_ the next proof step given the current proof state. The proof state is
_→_
_,_ described in the following format:
_→_
**1. All the goals are described under `[GOALS]` keyword. Each goal within**
_,→_ the `[GOALS]` is described under the keyword `[GOAL] i`, where `i` is a
_,→_ positive integer. For example, `[GOAL] 1`, `[GOAL] 2`, etc.
**2. Within each `[GOAL] i` keyword, the goal is described as a**
_,_ human-readable serialized version of the proof state as shown while
_→_
_,→_ running `lean` command. Each goal, might also accompany some hypotheses,
_,→_ which are described under the keyword `[HYPOTHESES] i`. Each hypothesis
_,→_ within `[HYPOTHESES]`, starts with the prefix `[HYPOTHESIS]`.
**3. Sometimes `[GOALS]` can have description about the proof state like**
_,→_ ``Proof finished`, `There are unfocused goals`, `Not in proof mode`, etc.`
_,→_ The description is described under the keyword `[DESCRIPTION]`.
**4. Finally, `[STEPS]` keyword is used to describe proof-steps used so far.**
_,→_ Each proof step starts with the prefix `[STEP]`, and is a valid Lean
_,→_ tactic. For example, `[STEPS][STEP]rw h at h,[STEP]{linarith},`.
**5. Sometimes, `[INCORRECT STEPS]` keyword optionally used to describe**
_,_ proof-steps which should NOT be generated. Use this as a hint for not
_→_
_,_ generating these proof-steps again as they failed previously. For
_→_
_,→_ example, `[INCORRECT STEPS][STEP]apply h,[STEP]rw ←h`.
**6. There is also an optional `[LAST STEP]` keyword which describes the**
_,_ proof-step generated last time. If the proof-step was incorrect, then
_→_
_,_ it is also followed by error message from Coq environment. For example,
_→_
_,→_ ``[LAST STEP]linarith,\n[ERROR MESSAGE]linarith failed to find a`
_,_ contradiction\nstate:\nx y :,\nh : x = 3 - 2 * y,\nh : 2 * x - y = 1\n
_→_
_,→_ false`. If the proof-step was correct then it is followed by the
_,→_ keyword `[SUCCESS]`. For example, `[LAST STEP]linarith,[SUCCESS]`.
_,_ Don't generate the last proof-step again if it was NOT successful.
_→_
**7. Sometimes there can be errors in the format of the generated response.**
_,→_ This is reported using the keyword `[ERROR]` followed by the error
_,→_ message. For example, `[ERROR]\nInvalid response:\n'Great! The proof is
_,_ complete.', \nStopping Reason: 'stop'.\n Please respond only in the
_→_
_,→_ format specified.[END]`. This means that the response generated by you
_,_ was not in the specified format. Please follow the specified format
_→_
_,_ strictly.
_→_
If you think you know the next proof step, then start your response with
_,→_ ``[RUN TACTIC]` followed by the next proof-step which will help in`
_,→_ simplifying the current proof state. For example, `[RUN
_,→_ TACTIC]induction c,[END]`. Generate exactly ONE proof-step. Multiple
_,_ proof steps are more error prone, because you will not get a chance to
_→_
_,_ see intermediate proof state descriptions. Make sure that the proof
_→_
_,_ step is valid and compiles correctly in Lean 3.
_→_
You can refer to the example conversation to understand the response format
_,_ better. It might also contain some similar proof states and their
_→_
_,_ corresponding proof-steps.
_→_
Please take a note of the following:
**1. Make sure to end all your responses with the keyword `[END]`. Follow the**
_,_ specified format strictly.
_→_
**2. While generating `[RUN TACTIC]` keyword, do NOT generate the tactics**
_,→_ mentioned under `[INCORRECT STEPS]`......
..............
Figure 9: Parts of ‘system prompt’ used by COPRA for Lean
-----
You are a proficient formal theorem-proving agent in Coq. You can predict
_,_ the next proof step given the current proof state, relevant definitions,
_→_
_,_ and some possible useful lemmas/theorems. The proof state is described
_→_
_,_ in the following format:
_→_
**1. All the goals are described under `[GOALS]` keyword. Each goal within**
_,→_ the `[GOALS]` is described under the keyword `[GOAL] i`, where `i` is a
_,→_ positive integer. For example, `[GOAL] 1`, `[GOAL] 2`, etc.
**2. Within each `[GOAL] i` keyword, the goal is described as a human-readable**
_,→_ serialized version of the proof state as shown while running `coqtop`
_,_ command. Each goal, might also accompany some hypotheses, which are
_→_
_,→_ described under the keyword `[HYPOTHESES] i`. Each hypothesis within
_,→_ ``[HYPOTHESES]`, starts with the prefix `[HYPOTHESIS]`. Apart from the`
_,→_ goal and hypothesis, some OPTIONAL keywords like `[DEFINITIONS] i` and
_,→_ ``[THEOREMS] i` are also present which describe the relevant definitions`
_,_ of symbols used in that goal, and some possible useful theorems or
_→_
_,_ lemmas which might help in simplifying the goal. Each definition within
_→_
_,→_ ``[DEFINITIONS]` starts with the prefix `[DEFINITION]`. Similarly, each`
_,→_ theorem/lemma under `[THEOREMS]` keyword starts with the prefix
_,→_ ``[THEOREM]`. These definitions and theorems can be used to simplify the`
_,_ goal using the tactics like rewrite, apply, etc. However, it is also
_→_
_,_ possible that these definitions and theorems are not used at all.
_→_
**3. Sometimes `[GOALS]` can have description about the proof state like**
_,→_ ``Proof finished`, `There are unfocused goals`, `Not in proof mode`, etc.`
_,→_ The description is described under the keyword `[DESCRIPTION]`.
**4. Finally, `[STEPS]` keyword is used to describe proof-steps used so far.**
_,→_ Each proof step starts with the prefix `[STEP]`, and is a valid Coq
_,→_ tactic ending with a `.`. For example, `[STEPS][STEP]intros
_,→_ a.[STEP]induction a.`.
**5. Sometimes, `[INCORRECT STEPS]` keyword optionally used to describe**
_,_ proof-steps which should NOT be generated. Use this as a hint for not
_→_
_,_ generating these proof-steps again as they failed previously. For
_→_
_,→_ example, `[INCORRECT STEPS][STEP]apply mul_assoc.[STEP]rewrite <- H.`.
**6. There is also an optional `[LAST STEP]` keyword which describes the**
_,_ proof-step generated last time. If the proof-step was incorrect, then
_→_
_,_ it is also followed by error message from Coq environment. For example,
_→_
_,→_ ``[LAST STEP]reflexivity.[ERROR MESSAGE]Error: In environment\nn :`
_,→_ nat\nUnable to unify "n" with "n + 0".`. If the proof-step was correct
_,→_ then it is followed by the keyword `[SUCCESS]`. For example, `[LAST
_,→_ STEP]reflexivity.[SUCCESS]`. Don't generate the last proof-step again
_,_ if it was NOT successful.
_→_
**7. Sometimes there can be errors in the format of the generated response.**
_,→_ This is reported using the keyword `[ERROR]` followed by the error
_,→_ message. For example, `[ERROR]\nInvalid response:\n'Great! The proof is
_,_ complete.', \nStopping Reason: 'stop'.\n Please respond only in the
_→_
_,→_ format specified.[END]`. This means that the response generated by you
_,_ was not in the specified format. Please follow the specified format
_→_
_,_ strictly.
_→_
If you think you know the next proof step, then start your response with
_,→_ ``[RUN TACTIC]` followed by the next proof-step which will help in`
_,→_ simplifying the current proof state. For example, `[RUN TACTIC]destruct
_,→_ c.[END]`. Generate exactly ONE proof-step. Multiple proof steps are
_,_ more error prone, because you will not get a chance to see intermediate
_→_
_,_ proof state descriptions. Make sure that the proof step is valid and
_→_
_,_ compiles correctly with Coq.
_→_
........................
Figure 10: Parts of ‘system prompt’ used by COPRA for Coq
-----
(a)
gss :
**forall l v m,**
(set l v m) l = match l with R
_,_ r =>
_→_
v | S sl ofs ty =>
Val.load_result
_,_ (chunk_of_type ty) v
_→_
_,_ **end.**
_→_
**Proof.**
**intros l v m.**
**destruct l as [r | s o t].**
- unfold set.
**destruct (Loc.eq (R r) (R**
_,_ r)); [reflexivity |
_→_
_,_ **contradiction].**
_→_
- unfold set.
**destruct (Loc.eq (S s o t)**
_,_ (S s o t));
_→_
_,_ [reflexivity |
_→_
_,_ **contradiction].**
_→_
**Qed.**
(b)
eq : forall (p q: loc), {p =
_,_ q} + {p <> q}.
_→_
**Proof.**
decide equality.
- apply mreg_eq.
- decide equality.
- decide equality.
**apply Pos.eq_dec.**
decide equality.
- decide equality.
**Qed.**
(c)
disjoint_cons_right
_,_ :
_→_
**forall a l1 l2,**
disjoint l1 (a ::
_,_ l2) -> disjoint
_→_
_,_ l1 l2.
_→_
**Proof.**
**intros a l1 l2**
_,_ H.
_→_
**unfold**
_,_ disjoint.
_→_
**intros x1 x2 H1**
_,_ H2.
_→_
**apply H.**
**assumption.**
**right.**
**assumption.**
**Qed.**
(d)
eq_int_type :
_,_ **forall (x y:**
_→_
_,_ int_type),
_→_
_,_ {x=y} + {x<>y}.
_→_
**Proof.**
decide
_,_ equality.
_→_
**Qed.**
(e)
set_locals_lessdef
_,_ : **forall e1**
_→_
_,_ e2,
_→_
_,_ env_lessdef e1
_→_
_,_ e2 -> **forall**
_→_
_,_ il,
_→_
_,_ env_lessdef
_→_
_,_ (set_locals il
_→_
_,_ e1)
_→_
_,_ (set_locals il
_→_
_,_ e2).
_→_
**Proof.**
**intros e1 e2 H.**
**induction il as**
_,_ [| a il'].
_→_
- apply H.
- intros.
**apply**
_,_ set_var_lessdef.
_→_
**apply IHil'.**
**apply**
_,_ Val.lessdef_refl.
_→_
**Qed.**
Figure 11: Some other interesting proofs generated for CompCert by COPRA. We can see that these
proofs are long, and often use ‘apply’ tactic which shows that COPRA can effectively use the retrieved
information to discharge the current proof state.
-----
| [
"Amitayush, Thakur",
"Yeming, Wen",
"Swarat, Chaudhuri"
] | 2023-10-06T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2310.04353v1 | null | null |
A Natural-Language Proof Assistant for Higher-Order Logic | N/A | null | [
"Adam, Dingle"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
A Study of Knowledge Distillation for Theorem Proving in Small Language Models | In this work, we will be comparing the performance in autoformalization of a small language model, and comparing it its own performance when it distills knowledge from a larger teacher model. We use Microsoft’s Phi-2 as the small student model, and OpenAI’s GPT4 as the teacher model. We propose a talk where we will be discussing the ability of Phi-2 to autoformalize, given feedback from the teacher model GPT-4. | null | [
"Shubhra, Mishra"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
A small survey of mathematical abilities of modern transformer architectures | N/A | null | # A small survey of mathematical abilities of modern transformer architectures[∗]
Bartosz Piotrowski
University of Warsaw, Poland, and Czech Technical University, Prague
**Introduction** Neural networks (NNs) are versatile tools which established state-of-the-art
in multiple domains. In particular, one of the spectacular advances achieved with use of NNs
has been in natural language processing (NLP). Today, the dominating kind of a neural model
used in this domain is based on the transformer architecture [10]. It was also observed that
neural architectures designed for NLP have ability to deal with tasks of symbolic (or algorithmic)
nature. These include: recognizing propositional entailment [2], computing integrals [4], solving
differential equations [1], normalizing polynomials [6], autoformalization [11], premise selection
[5], differentiation, solving linear equations, number base conversion, and many others [9].
It is not well understood how neural models are able to perform algorithmic tasks well. It
is also unclear what features of a neural architecture make it more suitable for such tasks. In
this work, we make a step towards understanding this. We compare two different architectures
– encoder-decoder versus decoder-only – and two different modes of training – starting from
scratch versus fine-tuning a model pre-trained on a natural language dataset. We also want to
see what is performance of a modern transformer model trained in a practical, limited setting:
training for no more than two days on a single GPU.
**Data** We took 8 different datasets representing mathematical tasks of varied difficulty: addition, multiplication, differentiation, integration, solving linear equations, division, number base
conversion, and normalizing polynomials. The first two were created for the purpose of this
work and the remaining six were taken from other works [9, 4, 6]. Each dataset consists of
_input-output examples, where input is a query to the model and output in an answer that the_
model is trained to produce. For each of the datasets a hold-out testing set of 10000 examples
was drawn. Below there are examples of input-output pairs for the linear equations dataset:
input output
```
Solve - 3 8 * h - 6 * h + 4 7 8 + 4 0 2 = 0 for h . 9
Solve 2 9 * i + 1 3 0 0 = - 3 * i + 4 1 * i - 7 4 * i for i . - 2 0
Solve 1 0 4 9 * d = 4 3 1 2 + 5 1 2 9 for d . - 4 5
```
We experimentally established that treating single digits as tokens is better then taking whole
numbers as tokens, and we preprocessed all the datasets accordingly.
**Transformer models** We compare two different state-of-the-art transformer architectures:
1. GPT2 [7]: a decoder-only architecture with 124 million of trainable parameters.
2. T5 [8]: an encoder-decoder architecture (closely following the original transformer model
described in [10]). We use the T5-small version of this model with 60 million parameters.
Both GPT2 and T5 proved to perform very well on a range of NLP tasks. For both of them
there are available high-quality pre-trained checkpoints released by the authors of the models.[1]
_∗The author was supported by the grant of National Science Center, Poland, no. 2018/29/N/ST6/02903._
[1They are available in Huggingface: https://huggingface.co/gpt2, https://huggingface.co/t5-small](https://huggingface.co/gpt2)
-----
A small survey of mathematical abilities of modern transformer architectures Piotrowski
**dataset** **T5** **GPT2**
pretrained untrained pretrained untrained
addition 86.74% 96.95% 98.60% 99.26%
multiplication 24.10% 47.58% 46.54% 68.00%
division 67.23% 70.98% 72.62% 77.16%
number base conversion 0.03% 2.58% 1.63% 3.52%
solving linear equations 37.56% 17.62% 45.57% 47.40%
differentiation 98.84% 95.05% 99.80% 99.75%
integration 26.65% 35.88% 79.70% 81.80%
polynomial normalization 58.13% 90.83% 89.35% 92.93%
Table 1: Final testing accuracy of neural language models tested on the eight datasets.
**Experimental setup** We perform the experiments using the Huggingface framework [12].
In each experiment we train with the Adam optimizer [3] with parameters: learning rate =
1e-5, β1 = 0.9, β2 = 0.999, ϵ =1e−8, weight decay = 0. When we fine-tune a pre-trained
model, we must use a tokenizer that comes along with the model – in cases of both GPT2
and T5 these are pre-trained byte pair encoding tokenizers. When training from scratch we
use a simple tokenizer splitting on whitespaces. All trainings were performed using GeForce
GTX 2080 Ti GPUs. We limit all the trainings to passing through a model 64 million training
examples.[2] All data and scripts required to reproduce the results presented here are available
[at https://github.com/BartoszPiotrowski/transformers-for-mathematics](https://github.com/BartoszPiotrowski/transformers-for-mathematics)
**Results and conclusions** Figure 1 shows training curves for one of the datasets – linear
equations. Table 1 shows the final testing accuracy for all the tasks. There are two conclusions:
1. In almost all cases, the pre-trained versions of models performed worse than the models
trained from scratch. It likely means that the data on which the models were pre-trained
does not contain much information relevant for dealing with mathematical problems.
There are, however, two exceptions: for T5 and datasets on differentiation and solving
linear equations. Especially for the latter the difference is much in favour of the pretrained version of the model. As for now, we do not have explanation for this.
2. GPT2 performed better than T5 for all the datasets. It means that decoder-only architectures are capable of learning mathematical tasks, despite the fact that in most of the cited
related works encoder-decoder architectures were used. However, it is unclear whether the
superior performance of GPT2 was due to the different architecture, or possibly because
of larger number of trainable parameters. Further experiments would be needed.
|Col1|Loss 1 model 0 GPT2−pretrained 1 GPT2−untrained T5−pretrained 2 T5−untrained 3 0ee++0206e0+40e6+606e6e++0066 tterpaining_step|
|---|---|
|Accuracy 1.0 1e+0 0.8 1e+0 accuracy loss 0.6 1e−0 0.4 1e−0 0.2 1e−0 0.0 0e+00 2e+06 4 training_s|1 0 1 2 3 0ee++0206e0+40e6+606e6e++0066 tterpaining_step|
Loss
3.0
loss
1.0
0.3
0e+00 2e+06 4e+06 6e+06
training_step
Figure 1: Training loss and accuracy on the linear equations dataset.
2This is a practical limit – full training takes then, depending on a dataset, between 4 and 50 hours.
-----
A small survey of mathematical abilities of modern transformer architectures Piotrowski
## References
[1] Fran¸cois Charton, Amaury Hayat, and Guillaume Lample. Learning advanced mathematical computations from examples. In 9th International Conference on Learning Representations, ICLR
_2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021._
[2] Richard Evans, David Saxton, David Amos, Pushmeet Kohli, and Edward Grefenstette. Can neural
networks understand logical entailment? In International Conference on Learning Representations,
2018.
[3] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua
Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR
_2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015._
[4] Guillaume Lample and Fran¸cois Charton. Deep learning for symbolic mathematics. In 8th In_ternational Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April_
_26-30, 2020. OpenReview.net, 2020._
[5] Bartosz Piotrowski and Josef Urban. Stateful premise selection by recurrent neural networks. In
Elvira Albert and Laura Kov´acs, editors, LPAR 2020: 23rd International Conference on Logic for
_Programming, Artificial Intelligence and Reasoning, Alicante, Spain, May 22-27, 2020, volume 73_
of EPiC Series in Computing, pages 409–422. EasyChair, 2020.
[6] Bartosz Piotrowski, Josef Urban, Chad E. Brown, and Cezary Kaliszyk. Can neural networks
learn symbolic rewriting? CoRR, abs/1911.04873, 2019.
[7] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language
models are unsupervised multitask learners. 2019.
[8] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena,
Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified
text-to-text transformer. J. Mach. Learn. Res., 21:140:1–140:67, 2020.
[9] David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical
reasoning abilities of neural models. In 7th International Conference on Learning Representations,
_ICLR 2019, New Orleans, LA, USA, May 6-9, 2019. OpenReview.net, 2019._
[10] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez,
Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Isabelle Guyon, Ulrike von
Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman
Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on
_Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages_
5998–6008, 2017.
[11] Qingxiang Wang, Cezary Kaliszyk, and Josef Urban. First experiments with neural translation
of informal to formal mathematics. In Florian Rabe, William M. Farmer, Grant O. Passmore,
and Abdou Youssef, editors, 11th International Conference on Intelligent Computer Mathematics
_(CICM 2018), volume 11006 of LNCS, pages 255–270. Springer, 2018._
[12] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony
Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, and Jamie Brew. Huggingface’s
transformers: State-of-the-art natural language processing. CoRR, abs/1910.03771, 2019.
-----
| [
"Bartosz, Piotrowski"
] | 2022-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems | Chain-of-Thought (CoT) prompting has enhanced the performance of Large Language Models (LLMs) across various reasoning tasks. However, CoT still falls short in dealing with complex math word problems, as it usually suffers from three pitfalls: semantic misunderstanding errors, calculation errors and step-missing errors. Prior studies involve addressing the calculation errors and step-missing errors, but neglect the semantic misunderstanding errors, which is the major factor limiting the LLMs' performance. To this end, we propose a simple-yet-effective method, namely Deeply Understanding the Problems (DUP), to improve the LLMs' math problem-solving ability by addressing semantic misunderstanding errors. The core of our method is to encourage the LLMs to deeply understand the problems and extract the key problem-solving information used for better reasoning. Extensive experiments on 10 diverse reasoning benchmarks show that our DUP method consistently outperforms the other counterparts by a large margin. More encouragingly, DUP achieves a new SOTA result on the GSM8K benchmark, with an accuracy of 97.1% under zero-shot setting. | null | ## Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems
**Qihuang Zhong[1][*], Kang Wang[1][∗], Ziyang Xu[1], Juhua Liu[1][†], Liang Ding[2]**
**Bo Du[1][†], Dacheng Tao[3]**
1Wuhan University 2The University of Sydney
3Nanyang Technological University
{zhongqihuang, kangwang319,liujuhua}@whu.edu.cn, [email protected]
**Abstract**
Chain-of-Thought (CoT) prompting has enhanced the performance of Large Language
Models (LLMs) across various reasoning tasks.
However, CoT still falls short in dealing with
complex math word problems, as it usually
suffers from three pitfalls: semantic misunderstanding errors, calculation errors and stepmissing errors. Prior studies involve addressing the calculation errors and step-missing errors, but neglect the semantic misunderstanding
errors, which is the major factor limiting the
LLMs’ performance. To this end, we propose
a simple-yet-effective method, namely Deeply
_Understanding the Problems (DUP)[1], to im-_
prove the LLMs’ math problem-solving ability
by addressing semantic misunderstanding errors. The core of our method is to encourage
the LLMs to deeply understand the problems
and extract the key problem-solving information used for better reasoning. Extensive experiments on 10 diverse reasoning benchmarks
show that our DUP method consistently outperforms the other counterparts by a large margin. More encouragingly, DUP achieves a new
SOTA result on the GSM8K benchmark, with
an accuracy of 97.1% under zero-shot setting.
_-21_
_-9_
_-8_
Figure 1: Error analysis of GSM8K problems with
**incorrect answers returned by zero-shot CoT and**
**our DUP using GPT-3.5 LLM. We randomly sample**
300 GSM8K problems, and follow (Wei et al., 2022)
and (Wang et al., 2023a) to assign the “Semantic Misunderstanding”, “Calculation Error” and “Step-missing
Error” to each incorrect answer. We see that our DUP
_method effectively reduces the errors among all types._
process a person might employ in solving a task.
Such a simple strategy can significantly improve
the reasoning ability of LLMs, and thus has attracted widespread attention in recent years.
Along this research line, many works focus on
designing prompting strategies to enhance LLM’s
reasoning ability, such as Zero-shot CoT (Kojima et al., 2022), Tree of Thought (Gao et al.,
2023), Plan-and-Solve (PS) prompting (Wang et al.,
2023a), and Complex CoT (Fu et al., 2023). Although achieving remarkable progress, they still
fall short in dealing with complex reasoning tasks,
_e.g., math word problems. As stated by Wei et al._
(2022), there are three main error types in the
context of CoT-based reasoning: semantic misun**_derstanding errors, calculation errors, and step-_**
**_missing errors. In our preliminary experiments_**
(as shown in Figure 1), we found that CoT has
major errors in semantic understanding, which is
the main factor limiting LLMs’ reasoning perfor
**1** **Introduction**
Despite the impressive performance of Large
Language Models (LLMs) in diverse NLP
tasks (Brown et al., 2020; Touvron et al., 2023; OpenAI, 2023), they often suffer from sub-optimal reasoning abilities, which cannot be overcome solely
by simply scaling up the model size (Rae et al.,
2021; Wang et al., 2023b). To tackle this limitation, Wei et al. (2022) propose a few-shot Chain-ofThought (CoT) prompting strategy, which prompts
the LLMs to mimic the given step-by-step thought
- Equal contribution: Qihuang Zhong and Kang Wang
contributed equally to this work.
† Corresponding Authors: Juhua Liu (e-mail: [email protected]), Bo Du (e-mail: [email protected]).
[1https://github.com/WHU-ZQH/DUP](https://github.com/WHU-ZQH/DUP)
-----
mance. Prior studies (Wang et al., 2023a; Chen
et al., 2023a) show that the carefully-designed
prompting strategies can achieve much fewer calculation errors and step-missing errors, but still
struggle to address the major semantic misunderstanding. Hence, there raises a question: whether
**_we can enhance the LLMs’ reasoning abilities by_**
**_reducing the semantic misunderstanding errors?_**
Intuitively, since complex math word problems
usually contain content irrelevant for solving the
task, LLMs might fail to identify the core question
and extract the relevant problem-solving information, thus leading to semantic misunderstanding
and poor performance. This can be also proved by
the findings in psychology, as prior studies (Hoyer
et al., 1979; Pasolunghi et al., 1999) show that the
irrelevant information may significantly decrease
some children and even adults problem-solving accuracy. Hence, this inspires us that, it is crucial
_to enforce the LLMs to pay more attention to the_
_core information and reduce the negative effects of_
_irrelevant information._
Motivated by this, we propose a simple-yeteffective method, namely Deeply Understanding
_the Problems (DUP), to improve the LLMs’ math_
problem-solving ability. The principle of our
method is akin to the human learning process, i.e.,
for human students who received a complex math
word problem, they will read and comprehend the
text of the problem, identify the core question that
needs to be answered, and finally solve it with relevant problem-solving information. Specifically,
DUP consists of three stages: ❶ Revealing the core
question of the input problem; ❷ Extracting the
problem-solving information relevant for solving
the core question; ❸ Generating and extracting the
final answer by combining the core question with
problem-solving information. By doing so, LLMs
can filter out irrelevant information and achieve
better math reasoning performance.
We conduct a series of experiments on 10 reasoning datasets across math, commonsense, and
symbolic reasoning benchmarks. The experimental
results of GPT-3.5-Turbo (Ouyang et al., 2022) and
GPT-4 (OpenAI, 2023) show that: 1) DUP consistently outperforms the other counterparts across
all datasets by a large margin; 2) Zero-shot DUP
can even outperform the few-shot methods on most
reasoning datasets; 3) More encouragingly, DUP
achieves new SOTA results on the popular GSM8K
(97.1%) and SVAMP (94.2%).
**Contributions.** To summarize, our contributions
are three-fold: (1) We reveal the underlying causes
of semantic misunderstanding errors, and propose
a simple yet effective approach (DUP) to effectively address the semantic misunderstanding and
boost LLMs’ math reasoning ability. (2) DUP is
easy-to-implement and plug-and-play. It can be
easily applied to various LLMs. (3) Extensive experiments show that DUP outperforms the other
counterparts by a large margin, and achieves new
SOTA results on GSM8K and SVAMP.
**2** **Related Works**
**2.1** **Reasoning with Large Language Models**
In recent years, we have witnessed numerous large
language models (LLMs) (Devlin et al., 2019;
Brown et al., 2020; Chowdhery et al., 2022; Zhong
et al., 2022; OpenAI, 2023; Touvron et al., 2023)
that achieved tremendous success in various natural language understanding and generation tasks.
However, LLMs usually struggle to provide stable
and accurate answers when dealing with reasoning
tasks (Zhang et al., 2023a), such as math reasoning (Cobbe et al., 2021; Patel et al., 2021; Ling
et al., 2017; Hosseini et al., 2014), commonsense
reasoning (Talmor et al., 2019; Geva et al., 2021)
and symbolic reasoning (Wei et al., 2022). Recent
works (Yuan et al., 2023; Luo et al., 2023; Yu et al.,
2023) have shown that reasoning-augmented LLMs
tuning with mathematical data can relatively improve reasoning ability. However, even with such
progress, these models still perform poorly in complex reasoning problems. This indicates that there
is still significant room for improving the LLMs’
performance in complex reasoning tasks.
**2.2** **Prompting Methods**
Despite the remarkable performance, the aforementioned training-bases approaches usually require
collecting large amounts of data and expensive
computational costs, and may cause LLMs universal ability to decrease. Hence, some works (Wei
et al., 2022; Kojima et al., 2022) attempt to use
the cheaper prompting methods to strengthen the
LLMs’ reasoning abilities without additional training. Wei et al. (2022) are the first to propose the
few-shot CoT prompting, which elicits a series of
intermediate natural language reasoning steps before giving the final answer. So far, CoT prompting
has been proven to significantly improve the reasoning capability of LLMs. Along this research line, a
-----
**Stage1: Reveal the Core Question**
**Q: Raymond and Samantha are cousins. Raymond was born 6 years before Samantha. Raymond had**
a son at the age of 23. If Samantha is now 31, how many years ago was Raymond's son born?
**Please extract the core question, only the most comprehensive and detailed one!**
**A: How many years ago was Raymond's son born?**
**(a)** **Core question**
**Stage2: Extract the Problem-solving Information**
**Q: Raymond and Samantha are cousins ...how many years ago was Raymond's son born?**
**Note:** **Please extract the question-solving information related to the problem (How many years ago**
was Raymond's son born?), only extract the most useful information, and list them one by one!
**A:** 1. Raymond was born 6 years before Samantha. 2. Samantha is now 31. 3. Raymond had a son at
the age of 23.
**Problem-solving**
**(b)**
**information**
**Stage3: Generate and Extract the Answer**
**Q: Raymond and Samantha are cousins ...how many years ago was Raymond's son born?**
Hint: 1. Raymond was born 6 years before Samantha. 2. Samantha is now 31. 3. Raymond had a son
at the age of 23.
How many years ago was Raymond's son born?
**Please understand the Hint and question information, then solve the question step by step and show**
**the answer.**
**A: Raymond is 6 years older than Samantha, so he is now 31 + 6 = 37 years old. Raymond had a son**
when he was 23, so his son was born 37 - 23 = 14 years ago. **The answer is: 14**
**(c)** **Final answer**
**Illustration of our DUP prompting strategy, which contains three-stage processes: ❶** revealing the
core question from the original input; ❷ extracting the problem-solving information based on the core question; ❸
generating and extracting the final answer via understanding the core question and problem-solving information.
large amount of works (Zhou et al., 2023; Wang to reveal the underlying causes of semantic mis, 2023a; Yao et al., 2023; Zhang et al., 2023b; understanding errors and provide a new view for
Chen et al., 2023b; Xu et al., 2023) attempt to care- addressing these errors, which can promote more
fully design more effective prompting strategies to related research in this field.
improve the reasoning ability of LLMs. Unfortunately, these prompt methods achieve remarkable
performance, but still fail to deal with complex reasoning tasks, e.g., math word problems. As stated
by (Wei et al., 2022), the reasoning mistakes of
LLMs can be classified into three categories: semantic misunderstanding errors, calculation errors,
and step-missing errors. Some prior works (Wang
et al., 2023a; Chen et al., 2023a) attempt to reduce
these errors, and achieve some performance improvements. However, they mainly focus on the
calculation errors and step-missing errors, but neglect the major semantic misunderstanding errors.
That is, it is critical but under-explored to study
how to address the semantic misunderstanding.
**Novelty.** In this paper, we are inspired by the
human learning process and propose to enforce
the LLMs to deeply understand the problems and
pay more attention to the core information relevant
for solving the problems. Although such a simple
prompting method might not introduce too many
new technologies, we are one of the rare works
to reveal the underlying causes of semantic misunderstanding errors and provide a new view for
addressing these errors, which can promote more
related research in this field.
**3** **DUP Prompting**
**Overview.** As mentioned in Section 1, semantic
misunderstanding is the major error for limiting
LLMs’ reasoning performance, which has not been
well studied in prior works. To this end, we introduce a new zero-shot CoT prompting approach,
called DUP prompting, which aims to improve the
LLMs’ reasoning abilities by enforcing the LLMs
to fully understand the problem. Figure 2 illustrates the process of our DUP method, which contains three-stage processes. Specifically, in stage
1, DUP reveals the core question from complex
and lengthy problem description. In stage 2, DUP
further extracts the problem-solving information
that is crucial for solving the core question from
the same description. In stage 3, given the core
question and problem-solving information, DUP
incorporates them into the original question to generate detailed response, and then extracts the final
**answer from the generated text.**
-----
**3.1** **Stage 1: Reveal the Core Question**
Understanding the goal of the question is the first
step to solving it, even for humans. Unfortunately,
LLMs might be confused by lengthy description of
complex reasoning question, leading to inaccurate
understanding and poor performance. In response
to this problem, we encourage LLMs to explicitly
extract the core question from the original input
before reasoning. Specifically, we design a core
question extraction prompt “Please extract core
**_question, only extract the most comprehensive_**
**_and detailed one!”, which is appended to the end_**
of question. We then use GPT-3.5-turbo (Ouyang
et al., 2022) to extract the core question from the
input. As a result, the output of this step will be
a shorter and clearer question that will be used to
help LLMs focus on the goal of input questions in
subsequent steps.
**3.2** **Stage 2: Extract the Problem-solving**
**Information**
In addition to clarifying the goal, it is also important to find the information required to solve
the problem. Without fully understanding and
utilizing the information provided by the question, reasoning cannot be correctly proceed. Moreover, it is difficult for LLMs to take full advantage of this information. Therefore, we design a
problem-solving information extraction prompt to
help solve this problem, i.e., “Note: Please extract
**_the problem-solving information related to the_**
**_core question [Core Question info], Only_**
**_extract the most useful information, list them one_**
**_by one!”. The slot [Core Question info]_**
contains the core question extracted in Stage 1. The
output of this step is a list of information, which is
useful in reasoning.
**3.3** **Stage 3: Generate and Extract the Answer**
Given the core question and problem-solving information extracted in previous stages, we incorporate them into the original input by the template
“Hint: [Problem-Solving Info]\n[Core
**Question]\n Please understand the Hint and**
**_question information, then solve the problem step_**
**_by step and show the answer.”, where the input_**
slots refer to the corresponding outputs in previous
steps. This prompt is beneficial to improve LLMs’
understanding of the question by explicitly pointing
out the goal and necessary information to solve the
question. Lastly, following the prior work (Wang
Dataset Domain # Samples Answer Format
GSM8K Math 1319 Number
MultiArith Math 600 Number
AddSub Math 395 Number
SVAMP Math 1000 Number
SingleEq Math 508 Number
AQuA Math 254 Option
Last Letters Symbolic 500 String
Coin Flip Symbolic 500 Yes / No
StrategyQA Commonsense 2290 Yes / No
CSQA Commonsense 1221 Option
Table 1: Details of all evaluated datasets. “Math”,
“Symbolic” and “Commonsense” denote the arithmetic,
symbolic and commonsense reasoning, respectively.
CSQA refers to the CommonensenseQA benchmark.
et al., 2023a), we enforce the LLMs to extract the
final numerical answer from the generated long reasoning text. Compared with rule-based matching
methods, using LLMs to extract the final answer is
more robust and accurate in practice. More details
of extracting answer can be found in Appendix A.1.
**4** **Experiments**
**4.1** **Setup**
**Tasks and Datasets.** We conduct extensive experiments on 6 Arithmetic Reasoning benchmarks, including GSM8K (Cobbe et al., 2021),
SVAMP (Patel et al., 2021), MultiArith (Roy
and Roth, 2015), AddSub (Hosseini et al., 2014),
AQuA (Ling et al., 2017) and SingleEq (KoncelKedziorski et al., 2015). Moreover, to investigate the universality of our DUP, we also evaluate it on several reasoning tasks in the other domains, i.e., 2 Commonsense Reasoning benchmarks (CommonsenseQA (Talmor et al., 2019),
StrategyQA (Geva et al., 2021)) and 2 Symbolic
**Reasoning benchmarks (Last Letter (Wei et al.,**
2022), Coin Flip (Wei et al., 2022)). The details of
all evaluated datasets are shown in Table 1.
**Compared Methods.** Since our DUP is zeroshot prompting method, we mainly compare it with
other zero-shot methods. For references, two typical few-shot prompting methods are also used as
the baselines.
- Zero-shot CoT (Kojima et al., 2022) simply
adds a prompt “Let’s think step by step” before each answer.
- Least-to-Most (Zhou et al., 2023) aims to
break down a complex problem into a series
-----
**Arithmetic Reasoning** **Score**
**Model** **Method**
**SVAMP** **GSM8K** **AddSub** **MultiArith** **AQuA** **SingleEq** **_Avg._** ∆
_Performance of Zero-shot Methods_
Zero-shot CoT 79.3 78.9 85.8 95.3 53.0 93.5 80.9 **-**
Least-to-Most 80.9 77.5 91.3 95.5 57.4 93.5 82.6 +1.7
GPT-3.5-Turbo
Zero-shot PS+ 80.7 79.3 86.5 92.0 55.9 93.0 81.2 +0.3
DUP (Ours) **82.5** **82.3** **92.1** **97.8** **60.2** **94.9** **84.9** **+4.0**
Zero-shot CoT 90.4 94.6 92.4 97.8 72.8 95.0 90.6 **-**
Least-to-Most 90.3 92.1 92.1 97.1 71.6 95.0 89.7 -0.9
GPT-4
Zero-shot PS+ 92.6 94.3 93.1 **98.1** 75.5 95.3 91.4 +0.8
DUP (Ours) **94.2** **97.1** **95.1** **98.1** **77.1** **96.0** **92.9** **+2.3**
_Performance of Few-shot Methods_
Manual-CoT 78.5 81.6 90.6 95.6 55.9 94.2 82.6 +1.7
GPT-3.5-Turbo
Auto-CoT 82.9 80.2 89.9 99.0 54.3 94.6 83.4 +2.5
Table 2: Results on Arithmetic Reasoning benchmarks. The best results in the zero-shot setting are in bold. “∆ ”
denotes the average performance improvement or decline of various methods compared to Zero-shot CoT.
of simpler sub-problems and then solve them
in sequence.
- Plan-and-Solve (Wang et al., 2023a)[2] devises
a plan to divide the entire task into smaller
sub-tasks, and then carries out the sub-tasks
according to the plan.
- Manual-CoT (Wei et al., 2022) is the first CoT
method that proposes to use a few CoT demonstrations as exemplars in prompting.
- Auto-CoT (Zhang et al., 2023b) improves the
vanilla CoT via sampling questions with diversity and generating reasoning chains to construct demonstrations.
**Implementation Details.** We use the public GPT3.5-Turbo (0613) (Ouyang et al., 2022) and GPT-4
(0613) (OpenAI, 2023) as the test LLMs. In this
work, all models are employed via OpenAI’s API,
and we adopt the greedy decoding strategy with
the temperature setting of 0 across all experiments.
For the few-shot prompting baselines, we keep the
recommended number of demonstration examples
specified in their original papers.
**4.2** **Main Results**
**Arithmetic Reasoning.** Table 2 presents the
main results of Arithmetic Reasoning benchmarks.
As seen, compared to the vanilla zero-shot CoT, our
DUP method brings consistent and significant performance gains across all reasoning benchmarks.
2We adopt the more sophisticated Plan-and-Solve (PS+)
prompting with more detailed instructions in this work.
**Method** **CSQA StrategyQA Avg.**
Zero-shot CoT 72.3 66.1 69.2 -
Least-to-Most 71.9 61.5 66.7 -2.5
Zero-shot PS+ 68.8 62.8 65.8 -3.4
DUP (Ours) **74.5** **68.5** **71.5 +2.3**
Few-shot Manual-CoT 76.5 64.8 70.8 +1.6
Few-shot Auto-CoT 74.2 62.5 68.3 -0.9
Table 3: Results of Commonsense Reasoning bench**marks. Here, GPT-3.5-turbo is used as the reasoner.**
Specifically, in GPT-3.5-turbo settings, DUP improves the accuracy by an average of 4% over
Zero-shot CoT. When using GPT-4, our DUP even
achieves new state-of-the-art results on GSM8K
**(97.1%) and SVAMP (94.2%).**
Moreover, we also report the results of few-shot
counterparts. Due to the high cost of GPT-4 API,
we use the more affordable GPT-3.5-turbo as the
responder for few-shot methods. Generally, the performance of zero-shot methods tends to be lower
than that of few-shot methods. However, with the
help of our DUP, GPT-3.5 can even achieve remarkable zero-shot performance that is higher than fewshot methods. These results prove the effectiveness
of our DUP method.
**Commonsense and Symbolic Reasoning.** Table 3 shows the performance on Commonsense
Reasoning datasets. Considering the experimental
cost, we only used GPT-3.5-turbo as the backbone
LLM. Compared to zero-shot methods, our DUP
method consistently outperforms all other counterparts. In comparison with few-shot methods,
our DUP also achieves comparable or even better
-----
**Method** **Last Letter Coin Flip Avg.** ∆
Zero-shot CoT 60.8 94.4 77.6 -
Least-to-Most **83.2** 82.8 83.0 +2.4
Zero-shot PS+ 60.6 95.4 78.0 +0.4
DUP (Ours) 81.2 **97.6** **89.4 +11.8**
Few-shot Manual-CoT 74.4 98.2 86.3 +8.7
Few-shot Auto-CoT 81.2 98.6 89.9 +12.3
Table 4: Results of Symbolic Reasoning benchmarks.
We also use the GPT-3.5-turbo as the reasoner.
**Stage 1 Stage 2 Stage 3 GSM8K AQuA Avg.**
% % % 76.5 51.2 63.8
! % % 78.9 53.1 66.0
% ! % 80.6 55.1 67.8
% % ! 80.3 54.7 67.5
! ! % 79.9 57.0 68.4
! % ! 80.8 56.2 68.5
% ! ! 81.7 58.2 69.9
! ! ! **82.3** **60.2** 71.2
Table 5: **Ablation study for different variations**
**of DUP prompting using GPT-3.5-turbo LLMs on**
GSM8K and AQuA datasets. Notably, Stage 1 involves
extracting core questions, Stage 2 focuses on extracting problem-solving information, and Stage 3 entails
solving the problem step by step.
performance.
Table 4 lists the results on Symbolic Reasoning datasets. On Last Letters, zero-shot DUP
(81.2%) is marginally worse than Zero-shot Leastto-Most (83.2%), on par with few-shot Auto-CoT
(81.2%), but significantly exceeds other Zero-shot
approaches and few-shot Manual-CoT (74.4%). On
Coin Flip, zero-shot DUP (97.6%) is slightly worse
than few-shot Manual-CoT (98.2%) and few-shot
Auto-CoT (98.6%), but significantly outperforms
other zero-shot baseline methods. In general, we
can basically conclude that our DUP outperforms
other zero-shot counterparts, and has great potential to beat the few-shot methods.
**4.3** **Ablation Study**
In this part, we conduct a series of ablation experiments to investigate 1) the impact of each stage in
our DUP, and 2) how to reduce the inference costs
and maintain the performance.
**Impact of different stages in our DUP.** In Table 5, we report the results of various combinations
of the three stages in our DUP. As seen, remov
Figure 3: Performance of DUP and DUP-s across
**various reasoning tasks on GPT-3.5-Turbo, where**
DUP-s merges the three-stage prompts into one prompt.
**Orange and Blue dashlines represent the average ac-**
curacy of DUP and DUP-s, respectively. We see that
our simplified DUP-s method also achieves remarkable
performance with less inference budget.
**+2.2** **+2.5**
**+3.4** **+3.2**
**(a) GSM8K** **(b) SVAMP**
Figure 4: Results of DUP Prompting with and with**out self-consistency(SC) using GPT-3.5-turbo LLM on**
GSM8K and SVAMP.
ing each stage results in performance degradation,
and the combination of all stages achieves the best
performance on GSM8K and AQuA benchmarks.
These results demonstrate the importance of each
stage in our DUP.
**Reduce inference cost without much perfor-**
**mance degradation.** Some readers may concern
that the three-stage processes in our DUP will cause
too much inference cost. Hence, we further propose the simplified DUP method, namely DUPs, which merges the three-stage prompts into one
prompt. We conduct contrastive experiments on all
10 reasoning benchmarks, and illustrate the results
in Figure 3. It can be found that on most tasks,
DUP-s achieves comparable performance to DUP,
and even achieves better performance on two tasks
of Addsub and SingleEQ. Therefore, in the case
of limited inference budget, using our simplified
DUP-s method is also a good choice.
-----
|54|Col2|
|---|---|
|54 Zero-shot CoT Zero-shot DUP (Ours) 33 18 9 10 2||
|||
|||
|Col1|Col2|
|---|---|
|||
|72|Col2|Col3|
|---|---|---|
|72 Zero-shot CoT 61 Zero-shot DUP (Ours) 30 21 11 7|||
||||
|30|Col2|
|---|---|
|||
|||
|Col1|Col2|
|---|---|
|||
|Col1|Col2|
|---|---|
|10 Zero-shot CoT Zero-shot DUP (Ours) 3 3 3 2 1|Col2|Col3|
|---|---|---|
||||
|Col1|Col2|
|---|---|
|||
|Col1|Col2|
|---|---|
|Col1|66 Zero-shot DUP (Ours) 0 0 0 0|
|---|---|
|3|30|0 Zero-shot CoT|
|---|---|---|
|||Zero-shot DUP (Ours) 5 0 0 1 1|
||||
|5|59|9 Zero-shot CoT|
|---|---|---|
|||Zero-shot DUP (Ours) 40 3 3 1 0|
||||
|Col1|Col2|25 Zero-shot DUP (Ours) 1 1 0 0|
|---|---|---|
“Semantic Misunderstanding”, “Calculation Error” and “Step-missing Error”. We also randomly select 300 examples
6050 54 Zero-shot CoTZero-shot DUP (Ours) 80 7261 Zero-shot CoTZero-shot DUP (Ours) 2520 20 Zero-shot CoTZero-shot DUP (Ours) 1210 10 Zero-shot CoTZero-shot DUP (Ours)
60
40 33 15 8
30 40 6
30
20 18 21 10 4 3 3 3
Error Count (Num.) 10 9 10 20 11 7 5 5 5 3 3 2 2 1
2 1
0 0 0 0
SM CE SE SM CE SE SM CE SE SM CE SE
(a) GSM8K (b) AQuA (c) MultiArith (d) SingleEq
80 74 Zero-shot CoT 30 Zero-shot CoT 59 Zero-shot CoT 30 27 Zero-shot CoT
66 Zero-shot DUP (Ours) 30 Zero-shot DUP (Ours) 60 Zero-shot DUP (Ours) 25 Zero-shot DUP (Ours)
60
40 20
20 40
40
10 20 10
20 5
Error Count (Num.)
0 0 0 0 0 0 1 1 3 3 1 0 1 1 0 0
0 0 0 0
SM CE SE SM CE SE SM CE SE SM CE SE
(e) CSQA (f) Coin Flip (g) SVAMP (h) AddSub
Figure 5: Quantitative error analyses of different prompting methods. Notably, “SM”, “CE” and “SE” denote the
from different reasoning task datasets (except AQuA that only contains 254 examples), and use GPT-3.5-Turbo
LLM to generate responses and count failed answers. We can see that our method reduces the frequency of various
error types compared with Zero-shot CoT.
|GPT-3.5 as extractor GPT-4 as extractor|Col2|
|---|---|
|LLaMA2-Chat 70B as extractor||
|||
|||
|GPT-3.5 as extractor GPT-4 as extractor|Col2|
|---|---|
|LLaMA2-Chat 70B as extractor||
|||
|||
|||
(a) GSM8K (b) SVAMP DUP (Ours) **56.4** **87.8** **72.1** **+7.4**
86 90
GPT-3.5 as extractor GPT-3.5 as extractor
84 GPT-4 as extractorLLaMA2-Chat 70B as extractor 8886 GPT-4 as extractorLLaMA2-Chat 70B as extractor
82 84
80 82
Accuracy (%) 80
78
78
76 76
Stage 1 Stage 2 Stage 1,2 Stage 1,2,3 Stage 1 Stage 2 Stage 1,2 Stage 1,2,3
(a) GSM8K (b) SVAMP
Figure 6: Analysis of different information extractors
**used in our DUP. We use the GPT-4, GPT-3.5-turbo,**
and Llama-2-Chat 70b to extract core question (Stage1)
and problem-solving information (Stage2) extractor, and
leverage the extracted contents to guide the responses
of GPT-3.5-turbo (Stage3). We see that more accurate
core questions and problem-solving information lead to
better performance.
**4.4** **Discussion and Analysis**
**Compatibility with Self-consistency.** We employ an innovative decoding strategy with selfconsistency (SC) (Wang et al., 2023b) as a substitute for the conventional greedy decoding approach, which initially samples N reasoning paths
rather than only opting for the greedy approach.
Subsequently, choosing the most consistent answer as the answer. Existing works (Wang et al.,
2023a; Xu et al., 2023) indicate that adopted SC
notably enhances the performance of chain-ofthought prompting. Here, to verify whether using
SC can further enhance the performance of DUP,
we conduct experiments on GSM8K and SVAMP
using GPT-3.5-Turbo, setting the temperature to
_CodeLlama-Instruct 13b_
Zero-shot CoT 24.2 73.1 48.6 -
DUP (Ours) **28.1** **74.6** **51.3** **+2.7**
_CodeLlama-Instruct 34b_
Zero-shot CoT 39.1 81.2 60.1 -
DUP (Ours) **43.5** **86.0** **64.7** **+4.1**
Table 6: Results of various Open-source LLMs on
**GSM8K and Addsub. We see that our DUP method**
still achieves much better performance than the baseline
Zero-shot CoT among all open-source LLMs.
0.7 and N to 10. The results are illustrated in Figure 4, where the SC strategy brings remarkable
performance improvements. Notably, DUP with
SC (88.6% and 88.8%) consistently outperforms
Zero-shot CoT with SC (86.4% and 86.3%), continuing to prove the superiority of our DUP method.
**Whether DUP also works well on Open-source**
**LLMs.** In the above experiments, we mainly
evaluate our DUP in the close-source GPT LLMs.
To verify whether our DUP also works well on
other open-source LLMs, we evaluate our method
on 4 widely-used LLMs, i.e., Llama-2-Chat 13b
-----
**Problem Description**
They decided to hold the party in their backyard. They have 10 sets of tables and each set has 6 chairs. If there are
11 people sitting on chairs. How many chairs are left unoccupied?
**Zero-shot CoT Reasoning**
Sure! First, let's calculate the total number of chairs available. There are 10 sets of tables, and each set has 6 chairs,
**so the total number of chairs is 10 sets * 6 chairs/set = 60 chairs.**
**Zero-shot DUP Reasoning**
**Stage1. Core Question: If there are 11 people sitting on chairs at a party with 10 sets of tables, each set having 6**
chairs, how many chairs are empty?
**Stage2. Problem-Solving info: 1. 10 sets of tables. 2. Each set has 6 chairs. 3. 11 people sitting on chairs11 people sitting on chairs.**
**Stage3. Generate the Answer:We can follow these steps: 1. Total chairs = 10 sets of tables * 6 chairs per set = 60**
chairs. 2. Number of chairs occupied = 11 people sitting on chairs. 3. Number of chairs left unoccupied = Total Number of chairs left unoccupied = Total
chairs - Number of chairs occupied = 49 chairs. Therefore, 49 chairs are left unoccupied at the party.chairs - Number of chairs occupied = 49 chairs,
Figure 7: Case study on SVAMP.Problem Description Zero-shot CoT fails to generate the correct answer, but our DUP method can
make the correct prediction via deeply understanding the problems.
A grocery store had 30 bottles of regular soda, 8 bottles of diet soda and 41 apples.How many bottles did they
have total?
and Llama-2-Chat 70b (Zero-shot CoT ReasoningTouvron et al., 2023), compared with the baseline zero-shot CoT, our
the cases of open-source LLMs, our DUP can stillZero-shot SUC Reasoning we can also find that DUP reduces the calculation
CodeLlama-Instruct 13b and CodeLlama-InstructSure! Let's start by adding the number of regular soda bottles and diet soda bottles together. 30 (regular soda) DUP reduces the semantic misunderstanding ef
+ 8 (diet soda) = 38 bottles of soda. Now, let's add the number of apples to the total number of soda bottles. 38
Roziere et al.(soda) + 41 (apples) = 79 bottles total, 2023). As seen in Table 6, in fectively, indicating its effectiveness. Additionally,
outperform the baseline zero-shot CoT by a largeStage1. Core Question: How many bottles of soda and applesand step-missing error as well. One possible reason did the grocery store have in total?
margin on GSM8K and AddSub benchmarks. ThisStage2. Problem-Solving info: - 30 bottles of regular soda. - 8 bottles of diet soda. - is that learning more problem-solving information41 apples
**Stage3. Generate the Answer: To find the total number of bottles of soda and apples at the grocery store, we can**
also proves the universality of DUP.simply add the quantities of each item together. 30 + 8 = 38 bottles of soda. 38 bottles of soda + 41 apples = 79 can lead to more accurate reasoning steps.
total items. So, the grocery store had a total of 79 bottles of soda and apples.To have a close look, we present a case study
**More Accurate Core Questions and Problem-** on SVAMP, as shown in Figure 7. It can be seen
**solving Information Lead to Better Performance.** that the zero-shot CoT fails to generate the correct
As stated in Section 1, the core of our DUP is to answer, but with the help of our DUP, the LLMs
guide LLMs to deeply understand the problems, can better understand the problems and generate
_i.e., extracting the core question and key problem-_ an accurate answer. More case studies on different
solving information. To verify it, we conduct benchmarks can be found in Appendix A.2.
contrastive experiments on AQuA, GSM8K, and
SVAMP datasets. Specifically, using the GPT-3.5- **5** **Conclusion**
Turbo as the final responder, we leverage different
LLMs (i.e., LLaMA2-Chat-70B, GPT-3.5, GPT- In this work, we reveal that deeply understanding
4) to extract the core question in Stage 1 and the the whole problem is crucial for tackling complex
key problem-solving information in Stage 2, re- reasoning task. Consequently, we introduce the
spectively. The contrastive results are illustrated DUP prompting method to improve the LLMs’ reain Figure 6. As seen, when using the GPT-4 as soning abilities by encouraging them to deeply unthe extractor, GPT-3.5 responder can achieve better derstanding the problem. A series of experiments
performance than that using GPT-3.5 as the ex- on arithmetic, commonsense, and symbolic reasontractor. Conversely, using the LLaMA2-Chat-70B ing tasks prove that DUP prompting brings consisas the extractor leads to worse results. These re- tent and significant performance gains across all
sults demonstrate that better core questions and key benchmarks and LLMs. Additionally, DUP outperproblem-solving information can result in better forms the other zero-shot counterparts by a large
reasoning performance, confirming our statement. margin, and achieves new SOTA results in two pop
ular benchmarks, i.e., GSM8K and SVAMP. More
**Error Analysis.** Here, to verify whether DUP in-depth discussions and systematic analyses furindeed reduces the semantic misunderstanding, we ther reveal that when and where our DUP works
randomly sample 300 samples different reasoning well. Moreover, considering that fully understanddatasets, and perform error analysis for the ques- ing the whole problem maybe also be beneficial to
tions with incorrect answers. The detailed quantita- non-reasoning tasks, we will attempt to expand our
tive results are illustrated in Figure 5. As seen, method to more fields in the future work.
-----
**Limitations**
DUP prompting generally requires three visits to
LLMs, which indeed increases the inference costs.
Although we attempt to merge the three stages of
DUP as a single one, this approach would slightly
led to worse performance. We will further explore
how to reduce the inference costs without losing
any performance in the future work.
**Ethic Statements**
We take ethical considerations very seriously and
strictly adhere to the ACL Ethics Policy. This paper
aims to improve the LLMs’ reasoning abilities via
a novel prompting strategy. All used models (or
APIs) and datasets in this paper are publicly available and have been widely adopted by researchers.
All experimental results upon these open models
and datasets are reported accurately and objectively.
Thus, we believe that this research will not pose
any ethical issues.
**References**
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020.
[Language models are few-shot learners. In NeurIPS.](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2023a. [Program of thoughts](https://openreview.net/pdf?id=YfZ4ZPt8zd)
[prompting: Disentangling computation from reason-](https://openreview.net/pdf?id=YfZ4ZPt8zd)
[ing for numerical reasoning tasks. Transactions on](https://openreview.net/pdf?id=YfZ4ZPt8zd)
_Machine Learning Research._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2023b. [Program of thoughts](https://openreview.net/forum?id=YfZ4ZPt8zd)
[prompting: Disentangling computation from reason-](https://openreview.net/forum?id=YfZ4ZPt8zd)
[ing for numerical reasoning tasks. TMLR.](https://openreview.net/forum?id=YfZ4ZPt8zd)
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebas[tian Gehrmann, et al. 2022. PaLM: Scaling language](https://arxiv.org/abs/2204.02311)
[modeling with pathways. arXiv preprint.](https://arxiv.org/abs/2204.02311)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](https://api.semanticscholar.org/CorpusID:239998651)
[lems. arXiv preprint.](https://api.semanticscholar.org/CorpusID:239998651)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
[Kristina Toutanova. 2019. BERT: Pre-training of](https://aclanthology.org/N19-1423)
[deep bidirectional transformers for language under-](https://aclanthology.org/N19-1423)
[standing. In NAACL.](https://aclanthology.org/N19-1423)
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023. Complexity-based prompting for](https://openreview.net/forum?id=yf1icZHC-l9)
[multi-step reasoning. In ICLR.](https://openreview.net/forum?id=yf1icZHC-l9)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2023. Pal: Program-aided language](https://arxiv.org/abs/2211.10435)
[models. arXiv preprint.](https://arxiv.org/abs/2211.10435)
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot,
[Dan Roth, and Jonathan Berant. 2021. Did aristotle](https://aclanthology.org/2021.tacl-1.21/)
[use a laptop? a question answering benchmark with](https://aclanthology.org/2021.tacl-1.21/)
[implicit reasoning strategies. TACL.](https://aclanthology.org/2021.tacl-1.21/)
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren
[Etzioni, and Nate Kushman. 2014. Learning to solve](https://doi.org/10.3115/v1/D14-1058)
[arithmetic word problems with verb categorization.](https://doi.org/10.3115/v1/D14-1058)
In EMNLP.
William J Hoyer, George W Rebok, and Susan Marx
Sved. 1979. Effects of varying irrelevant information
on adult age differences in problem solving. Journal
_of gerontology, 34(4):553–560._
Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
[guage models are zero-shot reasoners. In NeurIPS.](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish
Sabharwal, Oren Etzioni, and Siena Dumas Ang.
[2015. Parsing algebraic word problems into equa-](https://doi.org/10.1162/tacl_a_00160)
[tions. ACL.](https://doi.org/10.1162/tacl_a_00160)
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In ACL.](https://doi.org/10.18653/v1/P17-1015)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](https://arxiv.org/abs/2308.09583)
[ardmath: Empowering mathematical reasoning for](https://arxiv.org/abs/2308.09583)
[large language models via reinforced evol-instruct.](https://arxiv.org/abs/2308.09583)
_arXiv preprint._
[OpenAI. 2023. Gpt-4 technical report.](https://arxiv.org/abs/2303.08774)
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, John
Schulman, Jacob Hilton, Fraser Kelton, Luke Miller,
Maddie Simens, Amanda Askell, Peter Welinder,
Paul F Christiano, Jan Leike, and Ryan Lowe. 2022.
[Training language models to follow instructions with](https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf)
[human feedback. In NeurIPS.](https://proceedings.neurips.cc/paper_files/paper/2022/file/b1efde53be364a73914f58805a001731-Paper-Conference.pdf)
Maria Chiara Pasolunghi, Cesare Cornoldi, and
Stephanie De Liberto. 1999. Working memory and
intrusions of irrelevant information in a group of specific poor problem solvers. Memory & Cognition,
27:779–790.
-----
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP models really able to solve simple](https://aclanthology.org/2021.naacl-main.168)
[math word problems? In NAACL.](https://aclanthology.org/2021.naacl-main.168)
Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie
Millican, Jordan Hoffmann, Francis Song, John
Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models:
Methods, analysis & insights from training gopher.
_arXiv preprint._
[Subhro Roy and Dan Roth. 2015. Solving general arith-](https://doi.org/10.18653/v1/D15-1202)
[metic word problems. In EMNLP.](https://doi.org/10.18653/v1/D15-1202)
Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten
Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi,
Jingyu Liu, Tal Remez, Jérémy Rapin, et al. 2023.
[Code llama: Open foundation models for code. arXiv](https://arxiv.org/abs/2308.12950)
_preprint._
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. Commonsenseqa: A question](https://aclanthology.org/N19-1421/)
[answering challenge targeting commonsense knowl-](https://aclanthology.org/N19-1421/)
[edge. In NAACL.](https://aclanthology.org/N19-1421/)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and
fine-tuned chat models. arXiv preprint.
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu,
Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
[2023a. Plan-and-solve prompting: Improving zero-](https://doi.org/10.18653/v1/2023.acl-long.147)
[shot chain-of-thought reasoning by large language](https://doi.org/10.18653/v1/2023.acl-long.147)
[models. In ACL.](https://doi.org/10.18653/v1/2023.acl-long.147)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le,
Ed H. Chi, Sharan Narang, Aakanksha Chowdhery,
[and Denny Zhou. 2023b. Self-consistency improves](https://openreview.net/forum?id=1PL1NIMMrw)
[chain of thought reasoning in language models. In](https://openreview.net/forum?id=1PL1NIMMrw)
_ICLR._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022.
[Chain of thought prompting elicits reasoning in large](https://arxiv.org/abs/2201.11903)
[language models. In NeurIPS 2022.](https://arxiv.org/abs/2201.11903)
Xiaohan Xu, Chongyang Tao, Tao Shen, Can Xu,
Hongbo Xu, Guodong Long, and Jian-guang Lou.
[2023. Re-reading improves reasoning in language](https://arxiv.org/abs/2309.06275)
[models. arXiv preprint.](https://arxiv.org/abs/2309.06275)
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. 2023. Tree of thoughts: Deliberate
problem solving with large language models.
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2023. Meta-](https://arxiv.org/abs/2309.12284)
[math: Bootstrap your own mathematical questions](https://arxiv.org/abs/2309.12284)
[for large language models. arXiv preprint.](https://arxiv.org/abs/2309.12284)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling relationship on learning](https://arxiv.org/abs/2308.01825)
[mathematical reasoning with large language models.](https://arxiv.org/abs/2308.01825)
_arXiv preprint._
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew
[Chi-Chih Yao. 2023a. Cumulative reasoning with](https://arxiv.org/abs/2308.04371)
[large language models. arXiv preprint.](https://arxiv.org/abs/2308.04371)
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
[Smola. 2023b. Automatic chain of thought prompt-](https://openreview.net/forum?id=5NTt8GFjUHkr)
[ing in large language models. In ICLR.](https://openreview.net/forum?id=5NTt8GFjUHkr)
Qihuang Zhong, Liang Ding, Yibing Zhan, Yu Qiao,
Yonggang Wen, Li Shen, Juhua Liu, Baosheng Yu,
Bo Du, Yixin Chen, et al. 2022. Toward efficient language model pretraining and downstream adaptation
via self-evolution: A case study on superglue. arXiv
_preprint._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/forum?id=WZH7099tgfM)
[plex reasoning in large language models. In ICLR.](https://openreview.net/forum?id=WZH7099tgfM)
**A** **Appendix**
**A.1** **Prompt details.**
Here, we show the detailed prompts used in this
work, covering the prompts for inference, extracting answers and error analysis. Specifically, Table 8 shows the inference template for all reasoning tasks. Tables 9, 10 and 11 list the prompts
for extracting answers for Arithmetic Reasoning,
Commonsense Reasoning and Symbolic Reasoning
benchmarks, respectively. Moreover, the prompt
used to categorize the failure examples is shown in
Table 7.
**A.2** **More Case Studies**
To have a close look, we provide more case studies for each dataset in this part, i.e., AQuA (Table 12), GSM8K (Table 13), MultiArith (Table 14),
SVAMP (Table 15), AddSub (Table 16), SingleEq
(Table 17), CommonsenseQA (Table 18), StrategyQA (Table 19), and Coin Flip (Table 20). Specifically, taking the question of Table 12 as an example,
we present the outputs of three-stage processes of
our DUP method, respectively. The extracted core
question and key problem-solving information are
highlighted in blue and orange. The final answer
is highlighted in red. Please refer to the tables for
more details.
-----
Template
**Question: [Input Question].**
**Wrong Response: [Wrong Answer].**
**Correct Response: [Correct Answer].**
Please judge which type of error it belongs to based on the above information:
1. Semantic Misunderstanding: semantic misunderstanding or lack of commonsense concepts.
2. Calculation error: errors occurred while performing a basic operation.
3. Step-missing errors: missing step and hallucination.
Finally, please explain why this error falls into the category you select.
Table 7: Prompts for error analysis. The slot [Input Question] denotes the original input problem. The slots
[Wrong Question] and [Correct Question] denote the incorrect text generated by the LLMs and the original label.
No. Template Reasoning tasks
**Extract core question: Please extract core question, only the most**
comprehensive and detailed’s one!
**Extract problem-solving information : Please extract the most**
useful information related to the core question( [Core Question]),
Only extract the most useful information, and list them one by one!
**Generate the answer: Hint: [Problem-solving Info], \n[Core Question].**
\n Please understand core question and problem-solving information,
then solve thequestion step by step and show the answer.
GSM8K, AddSub,
SVAMP, MultiArith,
SingleEq, AQuA,
CSQA, StrategyQA,
Coin Flip
**Prompt: Please accurately understand the question useful information**
2 Last Letter
and solve the question step by step.
Table 8: Reasoning prompt templates for all reasoning tasks. Notably, [Core Question] indicates the extracted
core question, and [Problem-solving Info] indicates the extracted problem-solving information to the problem.
No. Template Arithmetic Reasoning
Here is a math question and a model’s answer about this question. Please extract the
EXACT number from the answer text as the final answer for question.
QUESTION: {}. \nANSWER: {}
Final format should be a legal ’number’ without any suffix such as ’$’.
The final answer is:
Here is a math question and a model’s answer about this question. Please extract the
EXACT choice from the answer text as the final answer for question.
QUESTION: {}. \nANSWER: {}
Final format should be a legal ’options’,If you can’t find the right choice, just answer
Z. The final answer is:
GSM8K, AddSub,
SVAMP, MultiArith,
SingleEq
AQUA
Table 9: Prompts for extracting answers with GPT-3.5-turbo on Arithmetic Reasoning.
-----
No. Template Commonsense Reasoning
Here is a Commonsense question and a model’s answer about this question.
Please extract the EXACT one choice from the answer text as the
final answer for question.
QUESTION: {}. \nANSWER: {}
Final format should be a legal ’choice’(eg. (A) or (b)),If you can’t find the
correct choice, just answer the one that is closest to the answer.
The final answer is:
Here is a Commonsense question and a model’s answer about this question.
Please extract theEXACT one choice from the answer text as the final
answer for question.
QUESTION: {}. \nANSWER: {}
Final format should be a legal ’string’(Yes or No), If you Uncertain or unknow,
Please understand that the question and answer information outputs the closest
answer,you can only output Yes or No.
The final answer is:
CommonsenseQA
StrategyQA
Table 10: Prompts for extracting answers with GPT-3.5-turbo on Commonsense Reasoning.
No. Template Symbolic Reasoning
Here is a question related to concatenating last letters, along with a model’s
answer to this question. Please extract the EXACT string from the answer
text to determine the final answer to the question.
QUESTION: {}. \nANSWER: {}
Note: Please extract the string after (the answer is:), or the string that appears at
the end of the sentence.
The final answer is:
Here is a Coin flip related question and a model’s answer about this question.
Please extract the EXACT string from the answer text as the
final answer for question.
QUESTION: {}. \nANSWER: {}
Note: If you find that the coin is still heads up, the answer is "yes". If you find
that the coin is not still heads up or tails up. the answer is "no".
The final answer is:
Last letters
Coin Flip
Table 11: Prompts for extracting answers with GPT-3.5-turbo on Symbolic Reasoning.
-----
**Q: A number is as much greater than 36 as is less than 86. Find the Number. Answer Choices: A)60**
B)56 C)51 D)61 E)41
**Stage1: Reveal the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: A number is as much greater than 36 as is less than 86. Find the number.**
**Stage2: Extract the problem-solving information.**
**A: Note: Please extract the most useful information related to the core question (A number is as**
**much greater than 36 as is less than 86. Find the number), only extract the most useful information,**
and list them one by one!
**Output: 1. The number is as much greater than 36 as it is less than 86.**
**Stage3: Generate and extract the answers.**
**A: Hint: 1. The number is as much greater than 36 as it is less than 86.**
A number is as much greater than 36 as is less than 86. Find the number.
Please understand the Hint and question information,then solve the question step by step and show
the answer.
**Output: The number that is as much greater than 36 as it is less than 86 can be found by calculating**
the average of 36 and 86. Average = (36 + 86) / 2 Average = 61. So, the number is 61, which is option
D.
Table 12: Example outputs by Zero-shot DUP for AQuA. Notably, the extracted core question and key problemsolving information are highlighted in blue and orange. The final answer is highlighted in red.
**Q: Morisette and Kael were asked to bring fruits. Morisette brought 5 apples and 8 oranges, while**
Kael brought twice the amount of apples and half the number of oranges than Morisette. How many
fruits do they have in total?
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: How many fruits do Morisette and Kael have in total?**
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (How many fruits do**
Morisette and Kael have in total?), only extract the most useful information, and list them one by one!
**Output: -Morisette brought 5 apples and 8 oranges.\n- Kael brought twice the amount of apples and**
half the number of oranges than Morisette.
**Stage3: Generate and extract the answers.**
**A: -Morisette brought 5 apples and 8 oranges.\n- Kael brought twice the amount of apples and half the**
number of oranges than Morisette.\nHow many fruits do Morisette and Kael have in total? \nPlease
understand the Hint and question information,then solve the question step by step and show the
answer.
**Output: Morisette brought 5 apples + 8 oranges = 13 fruits.\nKael brought 2 * 5 apples = 10**
|apples.\nKael also brought 1/2 * 8 oranges = 4 oranges.\n So, Kael oranges = 14 fruits.\nTherefore, Morisette and Kael have 13 fruits +|brought a 14 fruits|total of 10 = 27 fruits|apples + 4 in total.|
|---|---|---|---|
Table 13: Example outputs by Zero-shot DUP for GSM8K. Notably, the extracted core question and key problemsolving information are highlighted in blue and orange. The final answer is highlighted in red.
-----
**Q: Each chocolate bar in a box cost $3. If a box had 7 bars total and Olivia sold all but 4 bars, how**
much money would she have made?
**Stage1: Extract the core question.**
**A:Please extract core question, only the most comprehensive and detailed one!**
**Output:How much money would Olivia have made if she sold all but 4 bars from a box of 7 chocolate**
bars, where each bar costs $3?
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (How much money would**
Olivia have made if she sold all but 4 bars from a box of 7 chocolate bars, where each bar costs $3?),
only extract the most useful information, and list them one by one!
**Output:1. Each chocolate bar costs $3. 2. There are 7 bars in a box. 3. Olivia sold all but 4 bars.**
**Stage3: Generate and extract the answers.**
**A: Hint: 1. Each chocolate bar costs $3. 2. There are 7 bars in a box. 3. Olivia sold all but 4 bars.**
How much money would Olivia have made if she sold all but 4 bars from a box of 7 chocolate bars,
where each bar costs $3?
Please understand the Hint and question information,then solve the question step by step and show
the answer.
|Output: Olivia sold all but 4 bars from a box of 7 chocolate bars, where each bar costs $3. So, sold 7 - 4 = 3 bars. The total amount of money Olivia would have made is 3 bars * $3 per bar|Olivia = $9.|
|---|---|
Table 14: Example outputs by Zero-shot DUP for MultiArith. Notably, the extracted core question and key
problem-solving information are highlighted in blue and orange. The final answer is highlighted in red.
**Q: Baker made 144 cakes. He sold 71 of them. Then he made 111 more cakes.How many more cakes**
did baker make than those he sold?
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: How many more cakes did the baker make than those he sold?**
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (How many problems did**
Paige have to do for homework after she finished 44 of the problems at school, if she initially had 43
math problems and 12 science problems?), only extract the most useful information, and list them one
by one!
**Output: - Baker made 144 cakes.\n- He sold 71 of them.\n- He made 111 more cakes.**
**Stage3: Generate and extract the answers.**
**A: Hint: - Baker made 144 cakes.\n- He sold 71 of them.\n- He made 111 more cakes. \nHow**
many more cakes did the baker make than those he sold? \nPlease understand the Hint and question
information,then solve the question step by step and show the answer.
**Output: The baker initially made 144 cakes and then made an additional 111 cakes, so in total, he**
made 144 + 111 = 255 cakes. He sold 71 cakes. Therefore, the baker made 255 - 71 = 184 more cakes
|than|he|sold.|
|---|---|---|
Table 15: Example outputs by Zero-shot DUP for SVAMP. Notably, the extracted core question and key problemsolving information are highlighted in blue and orange. The final answer is highlighted in red.
-----
**Q: Craig walked 0.2 mile from school to David ’s house and 0.7 mile from David ’s house to his own**
house . How many miles did Craig walk in all?
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: How many miles did Craig walk in all?**
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (How many miles did Craig**
walk in all?), only extract the most useful information, and list them one by one!
**Output: - Craig walked 0.2 mile from school to David ’s house.\n- Craig walked 0.7 mile from**
David ’s house to his own house.
**Stage3: Generate and extract the answers.**
**A: Hint: - Craig walked 0.2 mile from school to David ’s house.\n- Craig walked 0.7 mile from David**
’s house to his own house. \nHow many miles did Craig walk in all? \nPlease understand the Hint and
question information,then solve the question step by step and show the answer.
|Output: Crai|g walked|a|total o|f 0.2|mile|+ 0.7|mile =|0.9|mile.|
|---|---|---|---|---|---|---|---|---|---|
Table 16: Example outputs by Zero-shot DUP for AddSub. Notably, the extracted core question and key
problem-solving information are highlighted in blue and orange. The final answer is highlighted in red.
**Q: Dana earns $13 per hour. She worked 9 hours on Friday, 10 hours on Saturday, and 3 hours on**
Sunday. How much money did Dana earn in all?
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: How much money did Dana earn in total if she earns $13 per hour and worked 9 hours on**
Friday, 10 hours on Saturday, and 3 hours on Sunday?
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (How much money did**
Dana earn in total if she earns $13 per hour and worked 9 hours on Friday, 10 hours on Saturday, and
3 hours on Sunday?), only extract the most useful information, and list them one by one!
**Output: 1. Dana earns $13 per hour.\n2. She worked 9 hours on Friday.\n3. She worked 10 hours on**
Saturday.\n4. She worked 3 hours on Sunday.
**Stage3: Generate and extract the answers.**
**A: Hint: 1. Dana earns $13 per hour.\n2. She worked 9 hours on Friday.\n3. She worked 10 hours**
on Saturday.\n4. She worked 3 hours on Sunday. \nHow much money did Dana earn in total if she
earns $13 per hour and worked 9 hours on Friday, 10 hours on Saturday, and 3 hours on Sunday?
\nPlease understand the Hint and question information,then solve the question step by step and show
the answer.
**Output: Dana worked a total of 9 hours on Friday + 10 hours on Saturday + 3 hours on Sunday = 22**
hours. Since Dana earns $13 per hour, for the 22 hours she worked, she earned 22 hours * $13/hour =
286.
Table 17: Example outputs by Zero-shot DUP for SingleEq. Notably, the extracted core question and key
problem-solving information are highlighted in blue and orange. The final answer is highlighted in red.
-----
**Q: The artist was sitting quietly pondering, then suddenly he began to paint when what struck him?**
Answer Choices: (A)sadness (B)anxiety (C)inspiration (D)discomfort (E)insights
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: What struck the artist and prompted him to start painting?**
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (What struck the artist and**
prompted him to start painting?), only extract the most useful information, and list them one by one!
**Output: - The artist was sitting quietly pondering\n- Suddenly, he began to paint\n- Something struck**
him and prompted him to start painting
**Stage2: Generate and extract the answers.**
**A: Hint: - The artist was sitting quietly pondering\n- Suddenly, he began to paint\n- Something struck**
him and prompted him to start painting \n3. She worked 10 hours on Saturday.\n4. She worked 3
hours on Sunday. \nWhat struck the artist and prompted him to start painting? \nPlease understand
the Hint and question information,then solve the question step by step and show the answer.
**Output: Based on the information provided, the answer is (C) inspiration. The artist was sitting**
|quietly that he express|ponder was in himse|ing, and suddenly something struck him, prompting him to start painting. This suggests spired by something, perhaps an idea, a vision, or a feeling, which motivated him to lf through painting.|
|---|---|---|
Table 18: Example outputs by Zero-shot DUP for CommonsenseQA. Notably, the extracted core question and
key problem-solving information are highlighted in blue and orange. The final answer is highlighted in red.
**Q: Can the original name of the zucchini be typed on the top row of a QWERTY keyboard?**
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: Can the original name of the zucchini be typed on the top row of a QWERTY keyboard?**
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (Can the original name**
of the zucchini be typed on the top row of a QWERTY keyboard?), only extract the most useful
information, and list them one by one!
**Output: 1. The original name of the zucchini is “Cucurbita pepo.” \n2. The top row of a QWERTY**
keyboard contains the letters “QWERTYUIOP.” \n3. The original name of the zucchini cannot be
typed using only the top row of a QWERTY keyboard.
**Stage3: Generate and extract the answers.**
**A: Hint: 1. The original name of the zucchini is “Cucurbita pepo.” \n2. The top row of a QWERTY**
keyboard contains the letters “QWERTYUIOP.” \n3. The original name of the zucchini cannot be
typed using only the top row of a QWERTY keyboard. \nCan the original name of the zucchini
be typed on the top row of a QWERTY keyboard? \nPlease understand the Hint and question
information,then solve the question step by step and show the answer.
|Output: No, t QWERTY ke and none of th to type the ori|he original yboard. Th ese letters ginal name|name of the zucchini “Cucurbita pepo” cannot be typed on the top row of a e top row of a QWERTY keyboard contains the letters “QWERTYUIOP” are present in the original name of the zucchini. Therefore, it is not possible of the zucchini using only the top row of a QWERTY keyboard.|
|---|---|---|
Table 19: Example outputs by Zero-shot DUP for StrategyQA. Notably, the extracted core question and key
problem-solving information are highlighted in blue and orange. The final answer is highlighted in red.
-----
**Q: A coin is heads up. Walter does not flip the coin. Mikayla does not flip the coin. Larry does not flip**
the coin. Ryan does not flip the coin. Is the coin still heads up? Note that "flip" here means "reverse".
**Stage1: Extract the core question.**
**A: Please extract core question, only the most comprehensive and detailed one!**
**Output: Is the coin still heads up if no one flips it?**
**Stage2: Extract the problem-solving information.**
**A: Please extract the most useful information related to the core question (Is the coin still heads up if**
no one flips it?), only extract the most useful information, and list them one by one!
**Output: 1. The initial position of the coin is heads up.\n2. Walter does not flip the coin.\n3. Mikayla**
does not flip the coin.\n4. Larry does not flip the coin.\n5. Ryan does not flip the coin.
**Stage3: Generate and extract the answers.**
**A: Hint: 1. The initial position of the coin is heads up.\n2. Walter does not flip the coin.\n3. Mikayla**
does not flip the coin.\n4. Larry does not flip the coin.\n5. Ryan does not flip the coin. \nIs the coin
still heads up if no one flips it? \nPlease understand the Hint and question information,then solve the
question step by step and show the answer.
**Output: Based on the given information, we can conclude the following: \n1. The initial position**
of the coin is heads up.\n2. Walter does not flip the coin. Therefore, the coin remains heads up.\n3.
Mikayla does not flip the coin. Therefore, the coin remains heads up.\n4. Larry does not flip the coin.
Therefore, the coin remains heads up.\n5. Ryan does not flip the coin. Therefore, the coin remains
|heads|up.Since|no|one|flips|the|coin,|the|coin|remains|heads|up.|
|---|---|---|---|---|---|---|---|---|---|---|---|
Table 20: Example outputs by Zero-shot DUP for Coin Flip. Notably, the extracted core question and key
problem-solving information are highlighted in blue and orange. The final answer is highlighted in red.
-----
| [
"Qihuang, Zhong",
"Kang, Wang",
"Liang, Ding",
"Ziyang, Xu",
"Dacheng, Tao",
"Juhua, Liu",
"Bo, Du"
] | 2024-05-29T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2404.14963 | https://arxiv.org/abs/2404.14963 | null |
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation | Inference-time computation is a powerful paradigm to enhance the performance of large language models (LLMs), with Best-of-N sampling being a widely used technique. However, this method is computationally expensive, requiring both (1) an external reward model and (2) the generation of multiple samples. In this work, we introduce a new generative self-evaluation scheme designed to adaptively reduce the number of generated samples while maintaining or even improving performance. We use a generative reward model formulation, allowing the LLM to predict mid-generation the probability that restarting the generation will yield a better response. These predictions are obtained without an external reward model and can be used to decide whether or not to generate more samples, prune unpromising samples early on, or to pick the best sample. This capability is very inexpensive as it involves generating a single predefined token. Trained using a dataset constructed with real unfiltered LMSYS user prompts, Llama 3.1 8B's win rate against GPT-4 on AlpacaEval increases from 21% to 34% with 16 samples and math performance on GSM8K improves from 84% to 91%. By sampling only when the LLM determines that it is beneficial to do so and adaptively adjusting temperature annealing, we demonstrate that 74% of the improvement from using 16 samples can be achieved with only 1.2 samples on average. We further demonstrate that 50-75% of samples can be pruned early in generation with minimal degradation in performance. Overall, our methods enable more efficient and scalable compute utilization during inference for LLMs. | A new generative self-evaluation scheme designed to adaptively reduce the number of generated samples while maintaining or even improving performance and enabling more efficient and scalable compute utilization during inference for LLMs is introduced. | [
"Rohin, Manvi",
"Anikait, Singh",
"Stefano, Ermon"
] | 2024-10-03T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.02725v1 | https://arxiv.org/abs/2410.02725 | https://www.semanticscholar.org/paper/62d4fcfb2f6717d4eb562f4e51f37004fc5d7109 |
|
Advancing Process Verification for Large Language Models via Tree-Based Preference Learning | Large Language Models (LLMs) have demonstrated remarkable potential in handling complex reasoning tasks by generating step-by-step rationales.Some methods have proven effective in boosting accuracy by introducing extra verifiers to assess these paths. However, existing verifiers, typically trained on binary-labeled reasoning paths, fail to fully utilize the relative merits of intermediate steps, thereby limiting the effectiveness of the feedback provided. To overcome this limitation, we propose Tree-based Preference Learning Verifier (Tree-PLV), a novel approach that constructs reasoning trees via a best-first search algorithm and collects step-level paired data for preference training. Compared to traditional binary classification, step-level preferences more finely capture the nuances between reasoning steps, allowing for a more precise evaluation of the complete reasoning path. We empirically evaluate Tree-PLV across a range of arithmetic and commonsense reasoning tasks, where it significantly outperforms existing benchmarks. For instance, Tree-PLV achieved substantial performance gains over the Mistral-7B self-consistency baseline on GSM8K (67.55% to 82.79%), MATH (17.00% to 26.80%), CSQA (68.14% to 72.97%), and StrategyQA (82.86% to 83.25%).Additionally, our study explores the appropriate granularity for applying preference learning, revealing that step-level guidance provides feedback that better aligns with the evaluation of the reasoning process. | This study proposes Tree-based Preference Learning Verifier (Tree-PLV), a novel approach that constructs reasoning trees via a best-first search algorithm and collects step-level paired data for preference training and empirically evaluates it across a range of arithmetic and commonsense reasoning tasks, where it significantly outperforms existing benchmarks. | [
"Wenqi, Zhang",
"Mingqian, He",
"Yongliang, Shen",
"Zeqi, Tan",
"Weiming, Lu"
] | 2024-06-29T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.00390v1 | https://arxiv.org/abs/2407.00390 | https://www.semanticscholar.org/paper/4ccff1f5b660ac253703dfe1eb053e7a57d4b06f |
|
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning | Large language models (LLMs) are displaying emergent abilities for math reasoning tasks,and there is a growing attention on enhancing the ability of open-source LLMs through supervised fine-tuning (SFT).In this paper, we aim to explore a general data strategy for supervised data to help optimize and expand math reasoning ability.Firstly, we determine the ability boundary of reasoning paths augmentation by identifying these paths' minimal optimal set.Secondly, we validate that different abilities of the model can be cumulatively enhanced by Mix of Minimal Optimal Sets of corresponding types of data, while our models MMOS achieve SOTA performance on series base models under much lower construction costs.Besides, we point out GSM-HARD is not really hard and today's LLMs no longer lack numerical robustness.Also, we provide an Auto Problem Generator for robustness testing and educational applications.Our code and data are publicly available at https://github.com/cyzhh/MMOS. | null | ## An Empirical Study of Data Ability Boundary in LLMs’ Math Reasoning
**Zui Chen[3][1,2], Yezeng Chen[3][1,2], Jiaqi Han[1,2], Zhijie Huang[1,2], Ji Qi, Yi Zhou[♣][3]**
1School of Information Science and Technology, ShanghaiTech University
2Shanghai Innovation Center for Processor Technologies
3School of Information Science and Technology, University of Science and Technology of China
{chenzui2022, chenyz2022, hanjq2022, huangzhj1}@shanghaitech.edu.cn;
[email protected]; [email protected]
**Abstract**
Large language models (LLMs) are displaying emergent abilities for math reasoning tasks,
and there is a growing attention on enhancing
the ability of open-source LLMs through supervised fine-tuning (SFT). In this paper, we
aim to explore a general data strategy for supervised data to help optimize and expand
math reasoning ability. Firstly, we determine
the ability boundary of reasoning paths augmentation by identifying these paths’ minimal
optimal set. Secondly, we validate that different abilities of the model can be cumulatively enhanced by Mix of Minimal Optimal
**Sets of corresponding types of data, while our**
models MMOS achieve SOTA performance
on series base models under much lower construction costs. Besides, we point out GSMHARD is not really hard and today’s LLMs
no longer lack numerical robustness. Also,
we provide an Auto Problem Generator for robustness testing and educational applications.
Our code and data are publicly available at
[https://github.com/cyzhh/MMOS.](https://github.com/cyzhh/MMOS)
Figure 1: Conceptual figure of the ability boundary
powerful base models with prompts composed of
their designed reasoning chains to create supervised data for SFT based on several public seed
datasets (Lu et al., 2022).
In this paper, we aim to explore a general data
strategy for supervised data to help optimize and
expand math reasoning ability. We primarily investigate the following research questions (RQs):
- RQ1: What is the ability boundary of reasoning paths, and how to select paths optimally?
- RQ2: How can we expand the ability boundary, and what kinds of problem sets are needed
for this expansion?
**1** **Introduction**
In the context of significant emergent abilities
demonstrated by Large Language Models (LLMs)
(Wei et al., 2022a; OpenAI, 2023), the focus on
math reasoning tasks, particularly Numerical QA
and Math Word Problems (MWP) (Kushman et al.,
2014; Upadhyay and Chang, 2017; Miao et al.,
2020a; Xu et al., 2022), is paramount. The current approach to activate these abilities in LLMs
involves carefully engineered prompting (Brown
et al., 2020), in-context learning (ICL) (Chen et al.,
2022b) or supervised fine-tuning (SFT).
Particularly due to computational costs and stability concerns (Yuan et al., 2023), there is growing
attention on enhancing the abilities of open-source
LLMs (Rozière et al., 2023) through SFT. Supervised data is crucial for SFT. Current research is
centered on using GPT-4 (OpenAI, 2023) or other
RQ1 originates from a common challenge in response augmentation methods: determining the optimal amount of data the training set should cover
to balance data amount, effectiveness, and generalizability. As for RQ2, we focus on introducing
additional problems instead of synthesizing new
questions for query augmentation, which could assist in selecting and combining the necessary data
from the chaotic reality of existing datasets. Actu
-----
ally, we first explore methods to enhance weak ability, and then focus specifically on Out-Of-Domain
(OOD) ability, numerical robustness, and further
extending the model’s existing ability.
The overall data strategy is illustrated in Figure 1.
Based on the initial set obtained from n-sampling,
we determine the ability boundary of reasoning
paths augmentation and then achieve optimization
by identifying the Minimal Optimal Set (MOS) for
individual datasets through deduplication. Furthermore, we facilitate expansion by creating Mix of
**Minimal Optimal Sets (MMOS).**
The findings for RQ1 (1,2) and RQ2 (3,4,5) include the following points:
1. Providing varied, deduplicated and correct
reasoning paths can improve math reasoning ability
in In-Domain and Similar-Domain data. (Sec 2.3)
2. The ability boundary of increasing reasoning
paths is reached, that is, we identify the minimal
optimal set, when the number of paths is similar to
the number of distinct problem solutions. (Sec 2.4)
3. Different abilities of the model can be cumulatively enhanced by mixing minimal optimal sets
of corresponding types of data. (Sec 3.2)
4. GSM-HARD is not really hard and the numerical robustness issue is no longer prevalent in
today’s LLMs. We also build a high-quality Auto
Problem Generator for these numerical robustness
tests and educational applications. (Sec 3.3 & 3.4)
5. An overlapping dataset can continue to enhance the model’s ability in the absence of corresponding data. And MMOS which has much lower
construction costs can also achieve SOTA performance on series base models. (Sec 3.5 & 3.6)
**2** **Ability Boundary of Reasoning Paths**
**2.1** **Overview**
In this section, for RQ1, we aim to determine the
ability boundary of reasoning paths and find a data
strategy. We hypothesize that a minimal set capable
of maximizing math reasoning ability consists of
varied, deduplicated and correct reasoning paths.
In following Section 2.2, we discuss about the
datasets. In Section 2.3, we identify this minimal
optimal set and determine the benefits of removing duplicates and keeping varied reasoning paths
within a certain range. In Section 2.4, we employ a
clustering method as a filter to further explore the
boundary. In Section 2.5, we conduct an ablation
experiment to assess the impact of ensuring the
correctness of the reasoning paths.
All detailed experiment settings are in C.
**2.2** **Dataset Comparation**
Six datasets are involved in this study. Detailed
information about their origins, example analyses,
and a preliminary estimation of their difficulty levels can be found in the Appendix A.
Figure 2: Visualization of query embedding distribution
through t-SNE across six distinct datasets.
To better understand the problems’ difference
across these datasets, we visualize the hidden representations of problems using t-SNE. This visualization as Figure 2 reveals a notable separation in
the distribution of problems from the GSM8K and
MATH datasets into two distinct clusters. This divergence emphasizes the contrast in question styles:
GSM8K being text-intensive, while MATH is more
focused on math expressions.
For the experiments presented in this section, we
exclusively use GSM8K without bootstrapping its
questions. Consequently, GSM8K is categorized
as our IND data. Conversely, the MATH dataset,
with its significant stylistic and content differences,
is classified as OOD data. Additionally, two other
datasets, SVAMP and ASDiV, although different in
origin from GSM8K, show similarities in both question types and spatial representations. Therefore,
we consider these to be Similar-Domain Datasets.
And we denote SVAMP and ASDiV as S&A in the
subsequent analysis.
**2.3** **Identify the Minimal Optimal Set**
To identify the minimal optimal set, we follow
these steps: 1) Sample a sufficient number of correct reasoning paths to form initial set. 2) Implement a deduplication algorithm to obtain its deduplicated subset. 3) Conduct a statistical analysis on
-----
Figure 3: Comparison of test set accuracy on GSM8K, S&A and MATH for models after SFT on Code LLaMA 7B
using series subsets of Du[k]400 [and][ E]u[k]400 [with different data amount.]
**Algorithm 1 Deduplicate Data by Codes**
**Require: data d, extract ξ(·), recovery** _ξ(·), ast-_
parse P(·), astunparse P(·), deduplicate D(·)
[e]
1: for i = 1 to n do [e] [e]
2: _ci_ _ξ(di_ _qi_ _ai_ _si) ▷_ Code Extraction
_←_ _|_ _⊕_ _⊕_
3: _ti′ ∼_ P(ci) _▷_ Code Astparse
4: _ti_ _▷_ Code Substitution
_′_ _[←]_ _[π][(][t]′[i][|][v][ ⊕]_ _[f]_ [)]
5:6: end forci _[∼]_ P[e](ti[)] _▷_ Code Astunparse
_′_ _′_
7: c D(c ) _▷_ Code Deduplication
_′ ←_ _′_
8: d _ξ(c_ _q_ _a_ _s)_ _▷_ Data Recovery
_←_ e _⊕_ _⊕_ _⊕_
4) Convert the normalized tree back into normal_′_
ized code, denoted as ci[.]
After completing the iteration, the normalized
codes are duplicated through plain text matching.
_′_
Finally deduplicated data d is recovered with the
deduplicated code, query, completion and source.
**The k-N relation can be regarded as an esti-**
mation of the relationship between the number of
reasoning paths per question k and the corresponding subset data amount N. This relation is obtained
by implementing an upper limit on the reasoning
paths per question in the initial set.
As shown in Appendix B, the k-N curve demonstrates a linear relationship on Eu400 with a median
of k = 400 and a mean of k = 392.14. In contrast,
on Du400, it exhibits a log-linear relationship with
the upper limit of reasoning paths per question k
with the subset data amount N. 4) Perform SFT on
several subsets to analyze the impact of removing
duplicates and keeping varied reasoning paths.
**Initial set created by various original methods**
face API and learning costs, and there is a scarcity
of training data being open-sourced. Therefore, we
attempt to directly use open-source models. Specifically, we opt for advanced models ToRA (Gou
et al., 2023) that combine programs and rationales,
and apply rejection sampling (Yuan et al., 2023) to
build initial set. And this method, resembling selflearning, possesses a certain degree of universality.
We employ four pre-trained models: ToRACODE 7B/13B/34B and ToRA 70B. For every
question in the GSM8K dataset, these models sample 100 reasoning paths each with temperature 0.9.
We then merge 400 reasoning paths and extract
those whose code can be executed and have correct
answers to obtain the initial training set Eu400.
**Deduplication Algorithm 1 aim to extract the**
deduplicated subset Du400 from Eu400 by codes
which share the same calculation process.
We iterate all n data with following steps:
1) Extract the code block ci from data di, which
includes query qi, completion ai and source si .
2) Employ the Abstract Syntax Tree (AST)
method to parse the code into the tree ti.
3) Normalize the tree by replacing variable
names v with lowercase letters and function names
_f with uppercase letters, resulting in t[′]i._
-----
|Dcluster,k u400|k|5 7 9 15 27 -|
|---|---|---|
||GSM8K S&A|71.4(+0.7) 70.9(-0.7) 72.6(-1.2) 73.4(+0.8) 74(-0.1) - 73.4(+0.5) 73.4(-0.9) 73.1(-0.9) 74.2(+0.6) 73.4(+0.0) -|
|Ecluster,k u400|k|2 4 8 12 24 36|
||GSM8K S&A|67.6(+0.6) 70.5(+0.6) 72.1(+0.5) 74.0(+2.3) 73.2(+0.0) 73.5(+0.8) 72.0(+0.3) 71.8(-1.1) 74.4(+2.0) 72.3(+0.2) 73.0(+2.0) 73.3(+1.4)|
Table 1: Comparison of test set accuracy on GSM8K and S&A for models after SFT on Code LLaMA 7B using
series subsets of Du[k]400 [and][ E]u[k]400 [through clustering.]
**Dataset** **k** **N** **GSM8K** **S&A**
_Du[k]400_ 9 44771 71.4 73.6
_Du[total,k]400_ 9 46740 69.8(-1.6) 73.7(+0.1)
_Du[k]400_ 89530 74.2 73.3
_∞_
_Du[total,k]400_ 126391 71.7(-2.5) 73.0(-0.3)
_∞_
Table 2: Comparison of test set accuracy on GSM8K
and S&A for models after SFT on Code LLaMA 7B
using Du400 and Du[total]400[.]
a median of 7 and a mean of 12.01. This indicates
that the deduplication method is effective but still
leaves room for improvement.
**Comparative experiment includes two aspects.**
Firstly, to verify the effectiveness of adding varied
paths, we conduct random selection of k paths for
each question within Du400 to obtain twelve Du[k]400
subsets with k ∈ {1,2,3,5,7,9,12,15,20,27,40,∞},
N ∈ {7.5,15,20,30,38,45,53,60,67,75,82,90}K.
Secondly, to better assess the impact of duplicate removal, we maintain a consistent order of
magnitude in terms of data amount on Eu400 and
obtain Eu[k]400 [with k][∈][{1,2,4,8,12,24,36,48} and]
N∈{7.5,15,30,60,90,180,270,360}K.
**Evaluation & Conclusion. We conduct SFT on**
Code LLaMA 7B using a series of subsets Du[k]400
and Eu[k]400[, and then inference on the test split of]
GSM8K, S&A, and MATH.
Results are shown in Figure 3. On the IND
dataset GSM8K, as indicated by the blue solid line,
the model’s ability maintains a linear relationship
with the logarithm of data amount before k = 9,
_N = 45K. In contrast, the blue dashed line rep-_
resenting the initial set data aligns with this trend
only when k is small and duplicate paths are less
likely to be selected. Beyond this point, further
increasing the data amount sharply diminishes the
marginal improvement in model ability. This suggests that enhancing the model’s ability stems from
adding varied reasoning paths, rather than merely
increasing the data amount.
We also observe that with the same data amount,
beyond N = 30K, the performance on Du400 consistently surpasses that on Eu400. This reflects that
removing duplicates can not only diminish the training duration but also enhance the model’s ability.
On the Similar-Domain Datasets S&A, potentially due to the inherently easier nature of the questions, the models achieve high effectiveness even
at k=1. The other conclusions are similar to those
observed on GSM8K.
However, on the OOD dataset MATH, the models consistently exhibit weaker ability. This may
be, as shown in Section 2.2, due to the differing
types of questions presented in the dataset.
Thus far, we have essentially reached the conclusion that providing varied, deduplicated, and
correct reasoning paths can improve math reasoning ability in both IND and Similar-Domain data.
Finally, we conduct a case study, as shown in
Appendix D, where our example problem has 10
different solutions which is similar to the previously inflection point of k=9. Therefore, we consider Du[k]400[=9] [as the minimal optimal set. From this,]
we draw another conclusion: the ability boundary
is reached, that is, we identify the minimal optimal
set, when the number of reasoning paths is similar
to the number of potential problem solutions.
**2.4** **Cluster as a Filter**
Our deduplication algorithm, as an extension of
a template method, is not flawless and can fail
to eliminate similar paths. The example problem
shown in Appendix D has only 10 distinct solutions. However, in Du400, 43 paths are still retained. When we implement random selection to
obtain Du[k]400[=9] [, it only includes 6 distinct solutions.]
We attempt to use clustering as a filter, replacing
random selection, in order to ensure that the resulting Du[k]400[=9] [subset contains a greater number of]
distinct solutions. Specifically, we first obtain the
embedding vectors of the codes. Then, we apply
Latent Semantic Analysis (LSA) for dimensionality reduction, followed by k-means clustering. We
extract and retain the central data points from these
clusters. On the same example problem, the new
_Du[k]400[=9]_ [contains 7 distinct solutions.]
-----
Figure 4: Comparison of test set accuracy on GSM8K, S&A and MATH for models after SFT on Code LLaMA 7B
using series subsets of DG[k] +M [and][ D]M[k] [with different MATH data amount.]
**3** **Expand Boundary with Problems**
**3.1** **Overview**
In this section, for RQ2, we consider expanding this
ability boundary by introducing additional problems. We first explore methods to enhance weak
ability, and then focus specifically on OOD ability, numerical robustness, and further extending the
model’s existing ability.
In Section 3.2, we examine whether the model’s
weak ability can be enhanced by providing corresponding data. Section 3.3 delves into the robustness of the model’s numerical abilities and the
issues present in a dataset, GSM-HARD. In Section 3.4, we develop an automated, high-accuracy
problem generator for constructing numerically perturbed data, demonstrating its practical application
value. Finally, in Section 3.5, we strive to achieve
a state-of-the-art model and discuss the potential
for further extending the model’s existing ability.
**3.2** **Enhance Weak Ability**
To address the issue of the weak ability of models
trained with the minimal optimal set of GSM8K
when applied to OOD set MATH, a straightforward
solution is to provide corresponding data.
Initially, following the same method described
in Section 2.2, we obtain a series of deduplicated
subsets DM[k] [constructed using the MATH dataset]
and subsequently conduct SFT on them. And, as
indicated by the green dashed line in Figure 4, we
In the comparative experiment, we replace random selection with clustering to obtain new subsets,
_Du[cluster,k]400_ and Eu[cluster,k]400 . We then conduct SFT on
Code LLaMA 7B using these subsets.
As shown in Table 1, the results on Eu[cluster,k]400 exhibited a consistent improvement, suggesting that
using clustering as a filter is viable. However, this
is not the case for Du[cluster,k]400 . We speculate that the
remaining similar paths after deduplication have
only a minor impact.
**2.5** **Correct Reasoning Ablation**
While ensuring the correctness of paths is intuitively sound, we also observe that some methods,
despite not guaranteeing correct answers for created problems, still yield reasonably good results.
Therefore, we aim to ablate the effect of ensuring
the correctness of paths.
During the acquisition of the initial set Eu400,
we retain all data, including those with incorrect
answers, resulting in Eu[total,k]400 [. After deduplicat-]
ing this set, we obtain Du[total,k]400 [. Subsequently, we]
generated subsets for k = 9 and k = ∞ through
random selection from these sets and conduct comparative experiments with these subsets.
As illustrated in Table 2, on GSM8K, not filtering out incorrect paths leads to a noticeable decline
in performance. However, this effect is not observed on S&A, which could be attributed to the
lower difficulty level of S&A.
-----
identify the minimal optimal set DM[k][=9] on MATH.
As expected, compared to the models trained on
_Du[k]400_ [originating from GSM8K, there is a signif-]
icant improvement in ability on MATH, and the
abilities on GSM8K and S&A, represented by the
blue and yellow dashed lines, are weaker.
Subsequently, we merge the subsets DM[k] [from]
the MATH dataset with the minimal optimal set
of GSM8K Du[k]400[=9] [, denoted as][ D]G[k] +M [. The experi-]
mental results on DG[k] +M [, as shown by the various]
solid lines, indicate that compared to DM[k] [, which]
provides the same amount of data from MATH,
there is a slight improvement in performance on
MATH, and a significant improvement on GSM8K
and S&A. Additionally, the local optimum point
for DG[k] +M [, similar to][ D]M[k] [, is also achieved at k=9.]
Similarly, compared to Du[k]400[=9] [,][ D]G[k][=9]+M [shows a]
slight decrease in performance on GSM8K, dropping from 72.6% to 70.3%, and a marginal decline on S&A, going from 73.6% to 76.5%. However, there is a significant improvement on MATH,
with a rise from 10.4% to 43.2%. Overall, DG[k][=9]+M
(102K) effectively combines the strengths of Du[k]400[=9]
(45K) and DM[k][=9] (57K), showcasing enhanced abilities on GSM8K, S&A, and MATH datasets.
We arrive at a fundamental conclusion: different
abilities of the model can be cumulatively enhanced
by mixing minimal optimal sets of corresponding
types of data. This finding provides a simple yet
effective method for enhancing the model’s weak
abilities by acquiring the corresponding datasets.
**3.3** **Is GSM-HARD Really Hard?**
Another ’weak ability’ of the DG[k][=9]+M [model is]
demonstrated on GSM-HARD (54.8% vs 70.3% on
GSM8K). This dataset is created by replacing the
numbers in GSM8K with larger ones (Gao et al.,
2023). Given that only the numerical values are
altered, the distribution of problems in Figure 2
remains almost identical. Based on the conclusions
from Section 2.3, such a significant discrepancy
should not occur, whether we consider it as IND or
Similar-Domain data. This leads us to questions: Is
GSM-HARD really hard? Is the model’s numerical
robustness indeed weak?
The first source of discrepancy arises from the
standards of ground truth. Due to the lack of meticulous design in the numerical values of the questions, some answers are not impractical, such as
receiving answers with decimals when asking about
quantities, or negative numbers when asking about
the amount decreased. In practice, these initial calculation results should be rounded or converted to
absolute values when providing answers, but GSMHARD directly annotates these initial calculation
results as the ground truth. We do not consider this
to be indicative of a gap in ability. Therefore, using the standards of GSM-HARD, and evaluating
based on the initial calculation results, the accuracy
rate increases to 63.3(+8.5)%.
The second source of the discrepancy is due to
errors in the ground truth annotation, stemming
from an imperfect automated annotation process in
GSM-HARD after modifying the problems. The
corresponding values in the code are not updated in
line with the changes in the numerical values in the
problems, thus leading to execution with retained
incorrect results as ground truth. We review the
first 50 samples where the DG[k][=9]+M [model make]
incorrect inferences and discover 25 errors in the
ground truth annotations. We can estimate that the
remaining gap, 70.3% - 63.3% = 7% < (1 - 63.3%) *
(25/50) * 63.3% can be covered by these annotation
errors. Finally, we conjecture that GSM-HARD is
not really hard and the numerical robustness issue
is no longer prevalent in today’s LLMs.
**3.4** **Auto Problem Generator**
Considering this, developing an Auto Problem Generator capable of reliably producing data similar to
GSM-HARD is meaningful. Such a generator can
be used to test the numerical robustness of models.
Additionally, it can also be utilized in educational
applications to assess students’ abilities.
Auto Problem Generator follows these steps:
1) Generate the deduplicated subset Dtest,u400
from the seed dataset, the test split of GSM8K,
following the method in Section 2.3.
2) For each question, extract the reasoning path
with the highest repetition as the main path and
separate the remaining path as the remain paths.
3) Extract numbers from questions using template matching and modify them with function f (·).
4) Modify the corresponding numbers in the
code of the main path and execute it to obtain the
answer Amain.
5) If the code execution fails or Amain < 0,
modify the numbers again with 50 times limit.
6) Repeat step 4 on the remaining paths and
obtain the answer set Aremains.
7) If all elements in Aremains are identical to
_Amain, then we believe Amain is correct._
-----
|Model|GSM8K SVAMP ASDiv MATH|GSM8K SVAMP ASDiv MATH|
|---|---|---|
||7B|13B|
|LLaMA-2 LLaMA-2 SFT LLaMA-2 RFT WizardMath MAmmoTH MetaMath MathCoder-L MathCoder-CL ToRA TORA-CODE MMOS MMOS-CODE MMOS-Min-CODE|13.3 38.0 50.7 4.1 41.3 31.9 47.4 7.2 50.3 - - - 54.9 57.3 59.1 10.7 53.6 67.7 - 31.5 66.5 - - 19.8 64.2 71.5 - 23.3 67.8 70.7 - 30.2 68.8 68.2 73.9 40.1 72.6 70.4 78.7 44.6 69.9 73.4 76.8 40.2 73.9 76.4 78.6 44.3 70.3 72.5 76.7 44.6|24.3 43.1 56.3 6.3 51.1 46.3 58.6 9.2 55.4 - - - 63.9 64.3 65.8 14.0 62.0 72.4 - 34.2 72.3 - - 22.4 72.6 76.9 - 29.9 74.1 78.0 - 35.9 72.7 72.9 77.2 43.0 75.8 75.7 81.4 48.1 74.8 77.0 80.0 43.2 77.1 77.5 81.9 48.1 - - - -|
Table 3: Comparison of test set accuracy on 4 datasets for LLaMA-2 and Code LLaMA 7B/13B based models.
8) Combine the correct Amain with the modified
questions to form the generated dataset P.
We apply the Distribution Perturbation (Xu et al.,
2022) on numerical values with the following function f (n) with µ=5, σ=1 and µ=1000, σ=300 to
create datasets P5 and P1000,
_f_ (n) = n + ⌊X⌋, X ∼N (µ, σ[2])
that N represents normal distribution. We manually review the first 100 QA pairs in P5 and achieve
a 98% accuracy rate, with only two questions having incorrectly annotated answers. A detailed analysis of these errors and their reasons can be found
in the Appendix E.
Thus, we have successfully developed a highquality Auto Problem Generator, which can be used
for testing the numerical robustness of models as
well as for educational application.
**Numerical Robustness represents a model’s**
consistent ability to handle different types of numerical values. Distribution Perturbation, as applied in GSM-HARD, P5, and P1000, is one such
example. We evaluate P5 and P1000 with the model
trained on Du[k]400[=9] [with only GSM8K data. The ex-]
perimental results show 73.8% on GSM8K, 72.1(1.7)% on P5 and 70.1(-3.7)% on P1000.
Then, employing the same approach, we pro_′_
duce P1000 [using the train split of GSM8K and]
include it in our training data. However, the results show tiny improvement, achieving 73.2% on
GSM8K, 72.6(-0.6)% on P5 and 70.4(-2.8)% on
_P1000. Considering the results of both sets of exper-_
iments, since providing corresponding data does
not enhance ability, we infer that the discrepancies
in P1000 are more likely due to annotation issues
caused by the inclusion of large numbers.
We also experiment with other numerical per
turbation approaches including Language Perturbation and Noise Perturbation. Language Perturbation does not entail changes to the answers and
simply involves converting numerical values into
their English word representations. This has led to
a slight improvement in the model’s performance.
Noise Perturbation introduces noise by adding decimal parts to the numerical values. The conclusions
drawn from this method are similar to those from
Distribution Perturbation. Overall, we conclude
that current LLMs no longer face significant issues
with numerical robustness.
**3.5** **Expand Existing Ability**
After utilizing all data from GSM8K and MATH,
we try to further expand existing ability in the absence of corresponding data, As shown in Figure
2, dataset TAL-SCQ displays query embeddings
that overlap with GSM8K and MATH. We generate
its minimal optimal set and merge it with DG[k][=9]+M [,]
denoted as DG+M +T . Similarly, we conduct SFT
on Code LLaMA 7B and achieve an accuracy of
73.9(+3.6)% on GSM8K, 77.5(+1.0)% on S&A,
and 44.3(+1.1)% on MATH. We conclude that an
overlapping dataset can continue to enhance the
model’s existing ability in the absence of corresponding data.
**3.6** **MMOS’ Advantage**
Our data strategy MMOS’ advantage stands for
two aspects, higher performance and lower construction costs. 1) The results, as shown in Table
3, indicate that our model using MMOS DG+M +T
achieves most SOTA performance. 2) When constructing the initial set, n-sampling on GPT-4 is
costly. Sampling 20 reasoning paths for each
seed question of DG+M +T will exceed a cost of
-----
$10,000, and additional learning costs are required
for post-processing using various methods. In contrast, MMOS can directly utilize corresponding
method models for sampling, avoiding these issues.
The sampled data will possess higher quality and
lower diversity. Furthermore, we also attempt to
significantly reduce computational costs by sampling 100 solutions for 19k seed questions using
only a 7B model. This can be completed within
12 hours on 8 A100 40G GPUs. This approach
yields about 30% amount of the GSM8K reasoning
paths and 90% for MATH, possibly because simpler problems are more prone to repetition. The
model obtained, MMOS-Min-CODE, also demonstrates satisfactory performance.
**4** **Related Work**
**4.1** **LLM for Math Reasoning**
**Prompt based methods activate the emergent**
abilities without training. A significant breakthrough comes from Chain-of-thought prompting
(CoT) (Wei et al., 2022b), which enhances the
ability of LLMs to tackle complex reasoning by
using explicit reasoning steps. The least-to-most
prompting strategy (Zhou et al., 2023) deconstructs
complex problems into a series of simpler subproblems, which are then solved sequentially. Program of thoughts prompting (Chen et al., 2022a)
and program-aided language models (Gao et al.,
2023) address the limited numerical abilities of
LLMs and utilize LLMs solely for understanding
problems and generating programs, while offloading computation to an external Python interpreter.
**Decoding related methods focus on enhanc-**
ing performance by replacing the greedy decoding
strategy during the inference stage. (Wang et al.,
2023b) samples a diverse set of reasoning paths
and selects the most consistent answer, while (Xie
et al., 2023) proposes a decoding algorithm that
integrates self-evaluation guidance through the use
of stochastic beam search.
**Supervised Fine-tuning (SFT) based meth-**
ods are designed to enhance the math reasoning
abilities of open-source models such as LLaMA
(Touvron et al., 2023a), LLaMA2 (Touvron et al.,
2023b), and Code LLaMA (Rozière et al., 2023),
while ensuring transparency. Current methods (Yu
et al., 2023; Wang et al., 2023a) largely utilize various prompt-based approaches, employing GPT-4
(OpenAI, 2023) or other open-source models, to
generate reasoning steps as training datasets based
on original QA in various datasets like GSM8k
(Cobbe et al., 2021), MATH (Hendrycks et al.,
2021). These generated reasoning steps can either be in natural language (rationales) (Zelikman
et al., 2022) or a combination with program (Yue
et al., 2023; Gou et al., 2023).
**4.2** **Supervised Data Augmentation**
**Response augmentation approaches (Luo et al.,**
2023; Gou et al., 2023) involve employing techniques such as nucleus sampling (top-p sampling)
(Holtzman et al., 2020) and combining inferences
from models of varying sizes, with the aim of enlarging the amount of generated reasoning steps
(Zhu et al., 2023). These methods generally adhere
to an intuitive understanding (Ni et al., 2022) that
fine-tuned models are prone to biases towards a
limited set of reference solutions.
**Query augmentation methods focus on modify-**
ing existing questions to generate new ones. Li et al.
(2023) finds that the diversity and complexity of
problems contribute positively to performance, and
Yu et al. (2023) believes that bootstrapping questions can provide multiple perspectives of metaknowledge, crucial for covering more unseen scenarios and enabling stronger generalization. Earlier
researches applied Named Entity Recognition or
Regular Expression matching to build templates
for augmenting questions (Li et al., 2022). Xu et al.
(2022) focused on categorizing questions based on
numerical abilities and designing numerical perturbations.
**5** **Conclusion**
We explore a general data strategy for supervised
data to help optimize and expand math reasoning
ability. Firstly, we ascertain the ability boundary
related to the augmentation of reasoning paths by
identifying the minimal optimal set of these paths,
with a focus on maximizing the data’s potential.
Secondly, we corroborate the premise that different abilities of the model can be collectively enhanced by amalgamating minimal optimal sets of
data, each corresponding to specific types of information. Our models achieve SOTA performance on
series base models with much lower construction
costs. Additionally, we uncover that LLMs currently do not exhibit a significant lack of numerical
robustness. Moreover, we introduce an Auto Problem Generator, designed for testing the robustness
of models and for use in educational applications.
-----
**Limitations**
The limitations of our paper include the following
aspects:
**Datasets and Models. In our research, we use**
only three datasets to create a mix of minimal optimal sets as training data. However, we are uncertain whether the two conclusions drawn in Section
4 – that different abilities of the model can be cumulatively enhanced by mixing minimal optimal
sets of corresponding types of data, and that an
overlapping dataset can continue to enhance the
model’s ability in the absence of corresponding
data – would still hold true with the introduction
of more and larger datasets. Additionally, we are
also unsure if these conclusions would apply to
larger-scale models, such as the 70B model.
**Sampling Bias.** Our conclusions regarding
the numerical robustness of the model, the GSMHARD dataset and the Auto Problem Generator are
based on our numerical analysis of accuracy and
results from sample checks. This approach may
introduce bias.
**Ethical Statements**
We claim from various aspects that our work is free
of ethical risks:
1) Our research utilizes open-source models like
LLaMA-2 and Code LLaMA and open datasets,
and we strictly adhere to their licensing protocols.
2) Despite providing a new auto problem generator, its functionality is confined to numerical perturbation derived from open-source datasets. We
endeavour to prevent the generation of illogical
problems and the dissemination of inappropriate information resulting from numerical perturbations.
3) During the writing process, we used GPT4 to
translate and correct grammatical errors, and the
text was human-checked and rewritten to ensure
that there were no ethical issues.
4) Our experiments are designed to be resourceefficient, requiring minimal compute time and
power.
**References**
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu,
Clemens Winter, Christopher Hesse, Mark Chen, Eric
Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess,
Jack Clark, Christopher Berner, Sam McCandlish,
Alec Radford, Ilya Sutskever, and Dario Amodei.
[2020. Language Models are Few-Shot Learners.](http://arxiv.org/abs/2005.14165)
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
[William W. Cohen. 2022a. Program of Thoughts](http://arxiv.org/abs/2211.12588)
[Prompting: Disentangling Computation from Rea-](http://arxiv.org/abs/2211.12588)
[soning for Numerical Reasoning Tasks.](http://arxiv.org/abs/2211.12588)
Yanda Chen, Ruiqi Zhong, Sheng Zha, George Karypis,
and He He. 2022b. [Meta-learning via Language](http://arxiv.org/abs/2110.07814)
[Model In-context Tuning.](http://arxiv.org/abs/2110.07814)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training Verifiers to Solve Math Word Prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Gra[ham Neubig. 2023. PAL: Program-aided Language](http://arxiv.org/abs/2211.10435)
[Models.](http://arxiv.org/abs/2211.10435)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
[Chen. 2023. ToRA: A Tool-Integrated Reasoning](http://arxiv.org/abs/2309.17452)
[Agent for Mathematical Problem Solving.](http://arxiv.org/abs/2309.17452)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021. Measuring Mathematical](http://arxiv.org/abs/2103.03874)
[Problem Solving With the MATH Dataset.](http://arxiv.org/abs/2103.03874)
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and
[Yejin Choi. 2020. The Curious Case of Neural Text](https://arxiv.org/abs/1904.09751)
[Degeneration.](https://arxiv.org/abs/1904.09751)
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
[Regina Barzilay. 2014. Learning to Automatically](https://doi.org/10.3115/v1/P14-1026)
[Solve Algebra Word Problems. In Proceedings of the](https://doi.org/10.3115/v1/P14-1026)
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281, Baltimore, Maryland. Association for Computational Linguistics.
Chengpeng Li, Zheng Yuan, Guanting Dong, Keming
Lu, Jiancan Wu, Chuanqi Tan, Xiang Wang, and
[Chang Zhou. 2023. Query and response augmenta-](http://arxiv.org/abs/2310.05506)
[tion cannot help out-of-domain math reasoning gen-](http://arxiv.org/abs/2310.05506)
[eralization.](http://arxiv.org/abs/2310.05506)
Jing Li, Aixin Sun, Jianglei Han, and Chenliang Li.
[2022. A Survey on Deep Learning for Named Entity](https://doi.org/10.1109/TKDE.2020.2981314)
[Recognition. IEEE Transactions on Knowledge and](https://doi.org/10.1109/TKDE.2020.2981314)
_Data Engineering, 34(1):50–70._
Pan Lu, Liang Qiu, Wenhao Yu, Sean Welleck, and
[Kai-Wei Chang. 2022. A survey of deep learning for](https://arxiv.org/abs/2212.10535)
[mathematical reasoning.](https://arxiv.org/abs/2212.10535)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. 2023.
-----
[WizardMath: Empowering Mathematical Reasoning](http://arxiv.org/abs/2308.09583)
[for Large Language Models via Reinforced Evol-](http://arxiv.org/abs/2308.09583)
[Instruct.](http://arxiv.org/abs/2308.09583)
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020a. A Diverse Corpus for Evaluating and De-](https://doi.org/10.18653/v1/2020.acl-main.92)
[veloping English Math Word Problem Solvers. In](https://doi.org/10.18653/v1/2020.acl-main.92)
_Proceedings of the 58th Annual Meeting of the Associ-_
_ation for Computational Linguistics, pages 975–984,_
Online. Association for Computational Linguistics.
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su.
[2020b. A Diverse Corpus for Evaluating and De-](https://doi.org/10.18653/v1/2020.acl-main.92)
[veloping English Math Word Problem Solvers. In](https://doi.org/10.18653/v1/2020.acl-main.92)
_Proceedings of the 58th Annual Meeting of the Associ-_
_ation for Computational Linguistics, pages 975–984,_
Online. Association for Computational Linguistics.
Ansong Ni, Jeevana Priya Inala, Chenglong Wang, Oleksandr Polozov, Christopher Meek, Dragomir Radev,
[and Jianfeng Gao. 2022. Learning Math Reason-](https://arxiv.org/abs/2205.14318)
[ing from Self-Sampled Correct and Partially-Correct](https://arxiv.org/abs/2205.14318)
[Solutions.](https://arxiv.org/abs/2205.14318)
[OpenAI. 2023. GPT-4 Technical Report.](https://doi.org/10.48550/arXiv.2303.08774)
Arkil Patel, Satwik Bhattamishra, and Navin Goyal.
[2021. Are NLP Models really able to Solve Simple](https://doi.org/10.18653/v1/2021.naacl-main.168)
[Math Word Problems? In Proceedings of the 2021](https://doi.org/10.18653/v1/2021.naacl-main.168)
_Conference of the North American Chapter of the_
_Association for Computational Linguistics: Human_
_Language Technologies, pages 2080–2094, Online._
Association for Computational Linguistics.
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle,
Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish
Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal
Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
[Thomas Scialom, and Gabriel Synnaeve. 2023. Code](http://arxiv.org/abs/2308.12950)
[Llama: Open Foundation Models for Code.](http://arxiv.org/abs/2308.12950)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
Grave, and Guillaume Lample. 2023a. [LLaMA:](http://arxiv.org/abs/2302.13971)
[Open and Efficient Foundation Language Models.](http://arxiv.org/abs/2302.13971)
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023b. Llama 2: Open Foundation and](http://arxiv.org/abs/2307.09288)
[Fine-Tuned Chat Models.](http://arxiv.org/abs/2307.09288)
[Shyam Upadhyay and Ming-Wei Chang. 2017. Anno-](https://aclanthology.org/E17-1047)
[tating Derivations: A New Evaluation Strategy and](https://aclanthology.org/E17-1047)
[Dataset for Algebra Word Problems. In Proceedings](https://aclanthology.org/E17-1047)
_of the 15th Conference of the European Chapter of_
_the Association for Computational Linguistics: Vol-_
_ume 1, Long Papers, pages 494–504, Valencia, Spain._
Association for Computational Linguistics.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
Mingjie Zhan, and Hongsheng Li. 2023a. [Math-](http://arxiv.org/abs/2310.03731)
[Coder: Seamless Code Integration in LLMs for En-](http://arxiv.org/abs/2310.03731)
[hanced Mathematical Reasoning.](http://arxiv.org/abs/2310.03731)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2023b. [Self-Consistency Improves](http://arxiv.org/abs/2203.11171)
[Chain of Thought Reasoning in Language Models.](http://arxiv.org/abs/2203.11171)
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,
Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, Ed H.
Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy
[Liang, Jeff Dean, and William Fedus. 2022a. Emer-](http://arxiv.org/abs/2206.07682)
[gent Abilities of Large Language Models.](http://arxiv.org/abs/2206.07682)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed H Chi, Quoc V Le,
[and Denny Zhou. 2022b. Chain-of-Thought Prompt-](http://arxiv.org/abs/2201.11903)
[ing Elicits Reasoning in Large Language Models.](http://arxiv.org/abs/2201.11903)
Yuxi Xie, Kenji Kawaguchi, Yiran Zhao, Xu Zhao, Min[Yen Kan, Junxian He, and Qizhe Xie. 2023. Self-](http://arxiv.org/abs/2305.00633)
[evaluation guided beam search for reasoning.](http://arxiv.org/abs/2305.00633)
Jialiang Xu, Mengyu Zhou, Xinyi He, Shi Han, and
[Dongmei Zhang. 2022. Towards Robust Numerical](https://arxiv.org/abs/2211.07455)
[Question Answering: Diagnosing Numerical Capa-](https://arxiv.org/abs/2211.07455)
[bilities of NLP Systems.](https://arxiv.org/abs/2211.07455)
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2023. Meta-](http://arxiv.org/abs/2309.12284)
[Math: Bootstrap Your Own Mathematical Questions](http://arxiv.org/abs/2309.12284)
[for Large Language Models.](http://arxiv.org/abs/2309.12284)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling Relationship on Learn-](http://arxiv.org/abs/2308.01825)
[ing Mathematical Reasoning with Large Language](http://arxiv.org/abs/2308.01825)
[Models.](http://arxiv.org/abs/2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
[2023. MAmmoTH: Building Math Generalist Mod-](http://arxiv.org/abs/2309.05653)
[els through Hybrid Instruction Tuning.](http://arxiv.org/abs/2309.05653)
-----
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah D
[Goodman. 2022. Star: Bootstrapping reasoning with](https://arxiv.org/abs/2203.14465)
[reasoning.](https://arxiv.org/abs/2203.14465)
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi.
[2023. Least-to-Most Prompting Enables Complex](http://arxiv.org/abs/2205.10625)
[Reasoning in Large Language Models.](http://arxiv.org/abs/2205.10625)
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang,
Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yu[jiu Yang. 2023. Solving Math Word Problems via](https://doi.org/10.18653/v1/2023.acl-long.245)
[Cooperative Reasoning induced Language Models.](https://doi.org/10.18653/v1/2023.acl-long.245)
In Proceedings of the 61st Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 4471–4485, Toronto, Canada._
Association for Computational Linguistics.
-----
**A** **Datasets**
In this paper, we have used 6 datasets, including: GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al.,
2021), GSM-HARD (Gao et al., 2023), SVAMP (Patel et al., 2021), ASDiv (Miao et al., 2020b) and
TAL-SCQ5K.
In terms of difficulty, by rough estimation:
SVAMP ≈ ASDiV < GSM8K ≈ GSM-HARD < TAL-SCQ5k < MATH
with ASDiV as a diversed dataset covering problem types taught in elementary school; SVAMP as a
structural modified version of a subset of ASDiv; GSM8K being an immense dataset covering grade
school problems, with 2-8 steps; GSM-HARD built upon GSM8K, replacing numbers with less-common
large numbers; TAL-SCQ5K containing primary, junior high and high school level mathematical topics;
MATH full of challenging competition mathematics problems which requires a strong mathematical
background to perform well on. Among which, MATH dataset and TAL-SCQ5K dataset further process
notations of difficulty levels.
|Dataset|Num|Example Q&A|
|---|---|---|
|GSM8K|Train: 7473 Test: 1319|question: In a dance class of 20 students, 20% enrolled in contemporary dance, 25% of the remaining enrolled in jazz dance, and the rest enrolled in hip-hop dance. What percentage of the entire students enrolled in hip-hop dance? answer: There are 20 x 20/100 = «20*20/100=4»4 students who enrolled in contemporary dance. So, 20 - 4 = «20-4=16»16 students are enrolled in either jazz or hip-hop dance. There are 16 x 25/100 = «16*25/100=4»4 students who enrolled in jazz dance. Hence, 16 - 4 = «16-4=12»12 students enrolled in hip- hop dance. This is 12/20 x 100% = 60% of the entire students. #### 60|
|MATH|Train: 7500 Test: 5000|question: How many vertical asymptotes does the graph of $ y=\frac {2}{xˆ2+x-6}$ have? answer: The denominator of the rational function factors into $xˆ2+x-6=(x-2)(x+3)$. Since the numerator is always nonzero, there is a vertical asymptote whenever the denominator is $0$, which occurs for $x = 2$ and $x = -3$. Therefore, the graph has $\boxed{2}$ vertical asymptotes.|
|GSM-HARD|Test: 1319|input: A robe takes 2287720 bolts of blue fiber and half that much white fiber. How many bolts in total does it take? code: def solution(): """A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?""" blue_fiber = 2287720 white_fiber = blue_fiber / 2 total_fiber = blue_fiber + white_fiber result = total_fiber return result target: 3431580.0|
-----
|Dataset|Num|Example Q&A|
|---|---|---|
|SVAMP|Test: 1000|Body: The Razorback t-shirt shop makes $ 78 dollars off each t-shirt sold. During the Arkansas game and the Texas tech game they sold a total of 186 t-shirts. If they sold 172 t-shirts during the Arkansas game Question: How much money did they make from selling the t-shirts during the Texas tech game? Equation: ( 78.0 * ( 186.0 - 172.0 ) ) Answer: 1092.0|
|ASDiv|Test: 2215|body: Robert wants to practice goal kicks for soccer. He decides to have 98 kicks before going home from the park. He takes 43 kicks before taking a break to get a drink of water. He then takes another 36 kicks. question: How many more kicks does he need to make before he goes home? equation: 98-43-36=19 answer: 19 (kicks)|
|TAL-SCQ|5000|problem: If $n$ is an even positive integer, the double factorial notation $n!!$ represents the product of all the even integers from $2$ to $n$. For example, $8!!=2\\cdot4\\cdot6\\cdot8$. What is the units digit of the following sum? $2!!+4!!+6!!+\\cdot\\cdot\\cdot+2018!!+2020!!+2022!!$ solution: Answer: $$2$$|
Table 4: Examples of datasets in their original format.
-----
**B** **Relationships of k & N**
Figure 5 illustrates the relationships of the number of reasoning paths and the data amounts of the
respective Du400 and Eu400.
We select multiple points from Du400 at regular intervals based on data amount. Simultaneously, we
choose corresponding points from Eu400 with similar data amounts to ensure consistence. The statistic
that the relationships of the number of reasoning paths and the data amount is detailed in Table 5 and 6.
Figure 5: The relationships of k & N.
|k|1 2 3 5 7 9 12 15 20 27 40 ∞|
|---|---|
|N|7457 14344 20225 30179 38150 44771 52857 59261 67281 74643 82180 89530|
Table 5: Extract subsets from relationships of k & N of Du400 for experiments.
|k|1 2 4 8 12 24 36 48|
|---|---|
|N|7457 14911 29810 59603 89386 178707 268003 357295|
Table 6: Extract subsets from relationships of k & N of Eu400 for experiments.
-----
**C** **Detailed Experiment Setting**
**Generate Deduplicated Datasets**
We spent 4 days generating both Du400, DM and the deduplicated dataset of TAL-SCQ in Section 2.3,
3.2 and 3.5 which is formed by employing four pre-trained models: ToRA-CODE 7B/13B/34B and ToRA
70B on the GSM8K, MATH and TAL-SCQ seperately, these models sample 100 reasoning paths each
with temperature 0.9.
**Training Models**
We conducted SFT on Code LLaMA 7B using various deduplicated dataset and their subsets in Section
2.4, 2.5, 3.3 and 3.4. Addtionally we conducted SFT on LLaMA-2 7B/13B for a horizontal comparison in
Section 3.5.
We used a learning rate of 2e-5 with a 3% warm-up period for 1 epoch and a global batch size of 128 on
NVIDIA A100 40G GPUs. We trained all models with DeepSpeed ZeRO Stage3 and Flash-Attention 2.
Apart from validating the effectiveness of the deduplication algorithm, where the random selection
process with seeds set to 0 and 42 and then averaging the inference results, all other training and inference
processes used a seed of 0.
The training sessions were completed within 1 day, with an average training duration of approximately
5 hours. The average evaluation time is less than 10 minutes.
-----
**D** **Case Study: Actual Distinct Solutions**
To validate the effectiveness of deduplication and using clustering as a filter, we conduct a case study
focusing on the relationship of reasoning paths and their problems’ actual distinct solutions.
In the deduplicated subset Du400 of the GSM8K dataset, we select the first question that has more
than 15 reasoning paths, which has 43 reasoning paths for this problem in fact. Next, we utilize random
selection and clustering as a filter to derive the subsets Du[k]400[=15] [and][ D]u[cluster,k]400 [=15]. We then separately
analyze the 15 reasoning paths in these two subsets for the corresponding problem to categorize their
actual distinct solutions on Table 7 and 8.
The question is formulated as follows:
_Tina makes $18.00 an hour. If she works more than 8 hours per shift, she is eligible for overtime, which_
_is paid by your hourly wage + 1/2 your hourly wage. If she works 10 hours every day for 5 days, how_
_much money does she make?_
Upon human analysis of this question, 10 distinct solutions have been summarized. These solutions
are categorized based on whether the default daily salary is the same, whether the default working hours
exceed the regular working hours, and how the total salary is calculated.
A Assuming that the daily wage is different, calculate the total wage by iterating over each day’s wage
and summing them in a loop.
B Assuming that the daily wage remains the same and that the working hours exceed the regular hours.
Calculate the regular wage and the overtime wage for five days separately, then sum them up.
C Assuming that the daily wage remains the same and that the working hours exceed the regular hours.
Calculate the regular hours worked and the overtime hours worked for five days separately, then sum
the total wages.
D Assuming that the daily wage remains the same and that the working hours exceed the regular hours.
Calculate daily wages, then sum them up.
E Assuming that the daily wage remains the same and that the Min and Max functions are used to
avoid situations where the working hours are smaller than the regular hours. Calculate the regular
wage and the overtime wage for five days separately, then sum them up.
F Assuming that the daily wage remains the same and that the Min and Max functions are used to
avoid situations where the working hours are smaller than the regular hours. Calculate the regular
hours worked and the overtime hours worked for five days separately, then sum the total wages.
G Assuming that the daily wage remains the same and that the Min and Max functions are used to
avoid situations where the working hours are smaller than the regular hours. Calculate daily wages,
then sum them up.
H Assuming that the daily wage remains the same and determine if the working hours exceed the
regular hours. Calculate the regular wage and the overtime wage for five days separately, then sum
them up.
I Assuming that the daily wage remains the same and determine if the working hours exceed the
regular hours. Calculate the regular hours worked and the overtime hours worked for five days
separately, then sum the total wages.
J Assuming that the daily wage remains the same and determine if the working hours exceed the
regular hours. Calculate daily wages, then sum them up.
-----
Table 7: Examples for Clustering as a Filter’s Actual Solutions.
**ID** **Completion** **Type**
```
```python
```
def total_salary():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8
1 overtime_wage = hourly_wage + 0.5 * hourly_wage J
salary_per_day = hourly_wage * 8 + overtime_wage * overtime_hours
total_salary = salary_per_day * num_days
return total_salary
total_salary = total_salary()
print(total_salary)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
overtime_wage = hourly_wage + 0.5 * hourly_wage
2 pay_for_overtime = overtime_hours * overtime_wage * num_shifts B
pay_for_regular_hours = (8 * hourly_wage) * num_shifts
total_pay = pay_for_overtime + pay_for_regular_hours
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
-----
hourly_wage = 18.00
overtime_hours = 10 - 8 # hours worked in excess of 8 hours
overtime_wage = hourly_wage + 0.5 * hourly_wage # overtime wage =
hourly wage + half hourly wage
daily_pay = 8 * hourly_wage + overtime_hours * overtime_wage
total_pay = daily_pay * 5 # total pay for 5 days
return total_pay
total_pay = calculate_earnings()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
overtime_threshold = 8
num_shifts = 5
total_pay = 0
for i in range(num_shifts):
if hours_per_shift <= overtime_threshold:
total_pay += hourly_wage * hours_per_shift
else:
overtime_hours = hours_per_shift - overtime_threshold
regular_hours = overtime_threshold
total_pay += (hourly_wage * regular_hours) + (hourly_wage * overtime_hours * 1.5)
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8
-----
overtime_pay = hourly_wage + 0.5 * hourly_wage
regular_pay = hourly_wage * 8
total_pay = (overtime_hours * overtime_pay + regular_pay) * num_days
return total_pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8 if hours_per_shift > 8 else 0
earnings = (hourly_wage * 8) + (hourly_wage * overtime_hours * 1.5)
total_earnings = earnings * num_days
return total_earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_shift = 10
num_days = 5
total_hours = hours_per_shift * num_days
overtime_hours = total_hours - 8 * num_days
earnings = (8 * num_days * hourly_wage) + (overtime_hours * overtime_wage)
return earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
-----
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_day = 10
num_days = 5
overtime_hours = hours_per_day - 8 if hours_per_day > 8 else 0
overtime_pay = overtime_hours * (hourly_wage + 0.5 * hourly_wage)
regular_pay = (8 * hourly_wage) + overtime_pay
total_pay = regular_pay * num_days
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8
overtime_wage = hourly_wage + 0.5 * hourly_wage
earnings = (8 * hourly_wage + overtime_hours * overtime_wage) *
num_days
return earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18
-----
hours_per_shift = 10
days_worked = 5
overtime_hours = hours_per_shift - 8
overtime_wage = hourly_wage + 0.5 * hourly_wage
regular_pay = hourly_wage * 8
10 overtime_pay = overtime_wage * overtime_hours
total_pay = (regular_pay + overtime_pay) * days_worked
return total_pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_shift = 10
num_shifts = 5
11 earnings_straight_time = hourly_wage * 8 * num_shifts
earnings_overtime = overtime_wage * (hours_per_shift - 8) * num_shifts
total_earnings = earnings_straight_time + earnings_overtime
return total_earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_salary():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_shift = 10
days = 5
regular_hours = min(hours_per_shift, 8)
12 overtime_hours = max(hours_per_shift - 8, 0)
regular_pay = regular_hours * hourly_wage * days
overtime_pay = overtime_hours * overtime_wage * days
total_pay = regular_pay + overtime_pay
-----
return total_pay
total_pay = calculate_salary()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_day = 10
num_days = 5
overtime_hours = max(hours_per_day - 8, 0)
13 regular_hours = min(hours_per_day, 8)
total_pay = (regular_hours * hourly_wage + overtime_hours * overtime_wage) * num_days
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_hours = 10
days = 5
regular_hours = 8
regular_earnings = hourly_wage * regular_hours * days
14 overtime_earnings = (hourly_wage + 0.5 * hourly_wage) * (overtime_hours
- regular_hours) * days
total_earnings = regular_earnings + overtime_earnings
return total_earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```
-----
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
15 overtime_wage = hourly_wage + 0.5 * hourly_wage D
pay = (8 * hourly_wage + overtime_hours * overtime_wage) * num_shifts
return pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
|Tina makes|990.0|dollars.|
|---|---|---|
```
Table 8: Examples for Random Selection’s Actual Solutions.
**ID** **Completion** **Type**
```
```python
```
def total_salary():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8
1 overtime_wage = hourly_wage + 0.5 * hourly_wage D
salary_per_day = hourly_wage * 8 + overtime_wage * overtime_hours
total_salary = salary_per_day * num_days
return total_salary
total_salary = total_salary()
print(total_salary)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
-----
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
overtime_threshold = 8
num_shifts = 5
total_pay = 0
for i in range(num_shifts):
if hours_per_shift <= overtime_threshold:
total_pay += hourly_wage * hours_per_shift
else:
overtime_hours = hours_per_shift - overtime_threshold
regular_hours = overtime_threshold
total_pay += (hourly_wage * regular_hours) +
(hourly_wage * overtime_hours * 1.5)
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = 0.5 * hourly_wage
hours_per_shift = 10
num_shifts = 5
overtime_hours = max(hours_per_shift - 8, 0)
regular_pay = hourly_wage * 8
overtime_pay = (hours_per_shift - 8) * (hourly_wage + overtime_wage)
total_pay = (regular_pay + overtime_pay) * num_shifts
return total_pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
-----
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
overtime_wage = hourly_wage + 0.5 * hourly_wage
pay_for_shifts = (8 * hourly_wage) + (overtime_hours * overtime_wage)
total_pay = num_shifts * pay_for_shifts
return total_pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8
if overtime_hours > 0:
overtime_wage = hourly_wage + 0.5 * hourly_wage
overtime_earnings = overtime_hours * overtime_wage * num_days
else:
overtime_earnings = 0
regular_hours = 8
regular_wage = hourly_wage
regular_earnings = regular_hours * regular_wage * num_days
total_earnings = regular_earnings + overtime_earnings
return total_earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_earnings():
-----
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_day = 10
days_per_week = 5
overtime_hours = hours_per_day - 8 # overtime is any hours over 8
base_earnings = hourly_wage * 8 * days_per_week
overtime_earnings = overtime_wage * overtime_hours * days_per_week
total_earnings = base_earnings + overtime_earnings
return total_earnings
total_earnings = calculate_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_day = 10
num_days = 5
overtime_hours = hours_per_day - 8 if hours_per_day > 8 else 0
overtime_pay = overtime_hours * (hourly_wage + 0.5 * hourly_wage)
regular_pay = (8 * hourly_wage) + overtime_pay
total_pay = regular_pay * num_days
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_days = 5
overtime_hours = hours_per_shift - 8
-----
overtime_wage = hourly_wage + 0.5 * hourly_wage
8 earnings_regular = 8 * hourly_wage * num_days
earnings_overtime = overtime_hours * overtime_wage * num_days
total_earnings = earnings_regular + earnings_overtime
return total_earnings
total_earnings = calculate_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
9 overtime_wage = hourly_wage + 0.5 * hourly_wage
pay = (8 * hourly_wage + overtime_hours * overtime_wage) * num_shifts
return pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
overtime_hours = hours_per_shift - 8
num_shifts = 5
10 overtime_wage = hourly_wage + 0.5 * hourly_wage
regular_earnings = 8 * hourly_wage * num_shifts
overtime_earnings = overtime_hours * overtime_wage * num_shifts
total_earnings = regular_earnings + overtime_earnings
return total_earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
-----
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
overtime_wage = hourly_wage + 0.5 * hourly_wage
11 regular_pay = hourly_wage * 8
overtime_pay = overtime_wage * overtime_hours
total_pay = (regular_pay + overtime_pay) * num_shifts
return total_pay
total_pay = total_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
12 overtime_wage = hourly_wage + 0.5 * hourly_wage
earnings = (8 * hourly_wage) + (overtime_hours * overtime_wage)
return earnings * num_shifts
total_earnings = calculate_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars."|
|---|---|---|
```
def calculate_salary():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
-----
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_shift = 10
num_shifts = 5
13 overtime_hours = hours_per_shift - 8 if hours_per_shift > 8 else 0
salary = (8 * hourly_wage + overtime_hours * overtime_wage) * num_shifts
return salary
salary = calculate_salary()
print(salary)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def calculate_pay():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
overtime_wage = hourly_wage + 0.5 * hourly_wage
hours_per_shift = 10
num_shifts = 5
overtime_hours = hours_per_shift - 8
14 regular_hours = 8
total_pay = (regular_hours * hourly_wage + overtime_hours * overtime_wage) * num_shifts
return total_pay
total_pay = calculate_pay()
print(total_pay)
```
```
``` output
```
990.0
```
```
```python
|Tina makes|990.0|dollars.|
|---|---|---|
```
def total_earnings():
"""Tina makes $18.00 an hour. If she works more than 8 hours per shift, she
is eligible for overtime, which is paid by your hourly wage + 1/2 your hourly
wage. If she works 10 hours every day for 5 days, how much money does she
make?"""
hourly_wage = 18.00
hours_per_day = 10
num_days = 5
overtime_hours = hours_per_day - 8 if hours_per_day > 8 else 0
15 overtime_wage = hourly_wage + 0.5 * hourly_wage
regular_earnings = hourly_wage * 8 * num_days
overtime_earnings = overtime_wage * overtime_hours * num_days
total_earnings = regular_earnings + overtime_earnings
-----
return total_earnings
total_earnings = total_earnings()
print(total_earnings)
```
```
``` output
```
990.0
```
```
```
|Tina makes|990.0|dollars.|
|---|---|---|
-----
**Error Analysis: Numerical Perturb to create P5**
Table 9: Wrong case for P5: Semantic Ambiguity
**Problem:**
Grandma Jones baked 5 apple pies for the fireman’s luncheon. She cut each pie into 8 pieces and set
the five pies out on the buffet table for the guests to serve themselves. At the end of the evening, after
the guests had taken and eaten their pieces of pie, there were 14 pieces of pie remaining. How many
pieces were taken by the guests?
**Completion:**
```
```python
```
def pie_pieces_taken():
"""Grandma Jones baked 5 apple pies for the fireman’s luncheon. She cut each pie into 8 pieces and
set the five pies out on the buffet table for the guests to serve themselves. At the end of the evening,
after the guests had taken and eaten their pieces of pie, there were 14 pieces of pie remaining. How
many pieces were taken by the guests?"""
pies_initial = 5
pieces_per_pie = 8
total_pie_pieces = pies_initial * pieces_per_pie
remaining_pie_pieces = 14
taken_pie_pieces = total_pie_pieces - remaining_pie_pieces
return taken_pie_pieces
taken_pie_pieces = pie_pieces_taken()
print(taken_pie_pieces)
```
```
```output
```
26
```
```
```
The guests took 26 pieces of pie.
**Perturbed Problem:**
Grandma Jones baked 10 apple pies for the fireman’s luncheon. She cut each pie into 13 pieces and set
the five pies out on the buffet table for the guests to serve themselves. At the end of the evening, after
the guests had taken and eaten their pieces of pie, there were 19 pieces of pie remaining. How many
pieces were taken by the guests?
**Perturbed Completion:**
```
```python
```
def pie_pieces_taken():
"""Grandma Jones baked 10 apple pies for the fireman’s luncheon. She cut each pie into 13 pieces
and set the five pies out on the buffet table for the guests to serve themselves. At the end of the evening,
after the guests had taken and eaten their pieces of pie, there were 19 pieces of pie remaining. How
many pieces were taken by the guests?"""
-----
pies_initial = 10
pieces_per_pie = 13
total_pie_pieces = pies_initial * pieces_per_pie
remaining_pie_pieces = 19
taken_pie_pieces = total_pie_pieces - remaining_pie_pieces
return taken_pie_pieces
taken_pie_pieces = pie_pieces_taken()
print(taken_pie_pieces)
```
```
```output
```
111
```
```
|The guests took|26|pieces of pie.|
|---|---|---|
```
Table 10: Wrong case for P5: Illogical Numerical Perturb
**Problem:**
A mechanic charges different rates to repair the tires of trucks and cars. For each truck tire that is
repaired, the mechanic will charge $60 and for each car tire that is repaired, the mechanic will charge
$40. On Thursday, the mechanic repairs 6 truck tires and 4 car tires. On Friday, the mechanic repairs
12 car tries and doesn’t repair any truck tires. How much more revenue did the mechanic earn on the
day with higher revenue?
**Completion:**
```
```python
```
def revenue]_difference():
"""A mechanic charges different rates to repair the tires of trucks and cars. For each truck tire that
is repaired, the mechanic will charge $60 and for each car tire that is repaired, the mechanic will charge
$40. On Thursday, the mechanic repairs 6 truck tires and 4 car tires. On Friday, the mechanic repairs
12 car tries and doesn’t repair any truck tires. How much more revenue did the mechanic earn on the
day with higher revenue?"""
truck_tire_cost = 60
car_tire_cost = 40
truck_tires_repaired_thursday = 6
car_tires_repaired_thursday = 4
car_tires_repaired_friday = 12
truck_tires_repaired_friday = 0
revenue_thursday = (truck_tire_cost * truck_tires_repaired_thursday) + (car_tire_cost *
car_tires_repaired_thursday)
revenue_friday = (truck_tire_cost - truck_tires_repaired_friday) + (car_tire_cost -
car_tires_repaired_friday)
revenue_difference = revenue_friday - revenue_thursday
return revenue_difference
revenue_difference = revenue_difference()
print(revenue_difference)
```
```
```output
```
-40
```
```
```
The mechanic earned 40 dollars less on Friday than on Thursday.
-----
**Perturbed Problem:**
A mechanic charges different rates to repair the tires of trucks and cars. For each truck tire that is
repaired, the mechanic will charge $65 and for each car tire that is repaired, the mechanic will charge
$45. On Thursday, the mechanic repairs 11 truck tires and 9 car tires. On Friday, the mechanic repairs
17 car tries and doesn’t repair any truck tires. How much more revenue did the mechanic earn on the
day with higher revenue?
**Perturbed Completion:**
```
```python
```
def revenue]_difference():
"""A mechanic charges different rates to repair the tires of trucks and cars. For each truck tire that
is repaired, the mechanic will charge $65 and for each car tire that is repaired, the mechanic will charge
$45. On Thursday, the mechanic repairs 11 truck tires and 9 car tires. On Friday, the mechanic repairs
17 car tries and doesn’t repair any truck tires. How much more revenue did the mechanic earn on the
day with higher revenue?"""
truck_tire_cost = 65
car_tire_cost = 45
truck_tires_repaired_thursday = 11
car_tires_repaired_thursday = 9
car_tires_repaired_friday = 17
truck_tires_repaired_friday = 0
revenue_thursday = (truck_tire_cost * truck_tires_repaired_thursday) + (car_tire_cost *
car_tires_repaired_thursday)
revenue_friday = (truck_tire_cost - truck_tires_repaired_friday) + (car_tire_cost -
car_tires_repaired_friday)
revenue_difference = revenue_friday - revenue_thursday
return revenue_difference
revenue_difference = revenue_difference()
print(revenue_difference)
```
```
```output
```
355
```
```
```
|The mechanic earned|355 −|dollars less on Friday than on Thursday.|
|---|---|---|
-----
| [
"Jiaqi, Han",
"Yezeng, Chen",
"Zui, Chen",
"Yi, Zhou",
"Zhijie, Huang",
"Ji, Qi"
] | 2024-02-23T00:00:00 | null | false | 0 | 1 | null | http://arxiv.org/abs/2403.00799 | https://arxiv.org/abs/2403.00799 | https://www.semanticscholar.org/paper/cad3a64f1cb3020747b8b381d72f9032677469dd |
An Evaluation Benchmark for Autoformalization in Lean4 | Large Language Models (LLMs) hold the potential to revolutionize autoformalization. The introduction of Lean4, a mathematical programming language, presents an unprecedented opportunity to rigorously assess the autoformalization capabilities of LLMs. This paper introduces a novel evaluation benchmark designed for Lean4, applying it to test the abilities of state-of-the-art LLMs, including GPT-3.5, GPT-4, and Gemini Pro. Our comprehensive analysis reveals that, despite recent advancements, these LLMs still exhibit limitations in autoformalization, particularly in more complex areas of mathematics. These findings underscore the need for further development in LLMs to fully harness their potential in scientific research and development. This study not only benchmarks current LLM capabilities but also sets the stage for future enhancements in autoformalization. | A novel evaluation benchmark designed for Lean4 is introduced, applying it to test the abilities of state-of-the-art LLMs, including GPT-3.5, GPT-4, and Gemini Pro, revealing that these LLMs still exhibit limitations in autoformalization, particularly in more complex areas of mathematics. | ## AN EVALUATION BENCHMARK FOR AUTOFORMAL### IZATION IN LEAN4
**Aryan Gulati*, Devanshu Ladsaria*, Shubhra Mishra*, Jasdeep Sidhu*, Brando Miranda**
Department of Computer Science
Stanford University
Stanford, CA 94305
_{aryangul, devanshu, shubhra, jasdeep6, brando9}@cs.stanford.edu_
ABSTRACT
Large Language Models (LLMs) hold the potential to revolutionize autoformalization. The introduction of Lean4, a mathematical programming language, presents
an unprecedented opportunity to rigorously assess the autoformalization capabilities of LLMs. This paper introduces a novel evaluation benchmark designed for
Lean4, applying it to test the abilities of state-of-the-art LLMs, including GPT3.5, GPT-4, and Gemini Pro. Our comprehensive analysis reveals that, despite
recent advancements, these LLMs still exhibit limitations in autoformalization,
particularly in more complex areas of mathematics. These findings underscore
the need for further development in LLMs to fully harness their potential in scientific research and development. This study not only benchmarks current LLM
capabilities but also sets the stage for future enhancements in autoformalization.
[Benchmark Page: HuggingFace](https://huggingface.co/datasets/shubhramishra/autoformalization-benchmark-lean4)
INTRODUCTION
Generating formal statements is tedious, but the impressive advances in LLMs’ capabilities show a
promising future for autoformalized, verifiable systems (Klein et al., 2018). Computer-formalized
mathematics has seen advances in many directions, including the rapid development of new
computer-interpretable mathematical languages. One such language is Lean4, the non backwardscompatible successor to Lean3. Given the differences between the two languages, a benchmark that
evaluates a LLM’s ability to autoformalize into Lean4 has become increasingly important.
**Contribution: In this paper, we propose a benchmark of 101 pairs of mathematical formal-informal**
statements across 17 different topics in math. Then, we manually evaluated three different state of
the art LLMs (GPT-3.5, GPT-4, and Gemini Pro) on the benchmark.
Many benchmarks have used the perplexity metric to evaluate autoformalizations (OpenAI; Azerbayev et al., 2023). However, this relies on string/pattern matching, which is not a very robust
measure of autoformalization, given the fact that LMs may generate correct formalizations that differ in structure or wording. In our paper, we evaluate autoformalizations on a 0-4 scale based on
correction effort, as proposed in (Jiang et al., 2023). Correction effort refers to the amount of necessary adjustments or modifications required to transform the generated formalization output of a
LLM into an accurate and fully correct Lean4 formalization. Additionally, we split the statements
into math topics, which lets our evaluation extend beyond an accuracy metric, providing a more
fine-grained understanding of how LLMs autoformalize, and where more work is still needed.
2 METHODOLOGY AND RESULTS
To assess the autoformalization capabilities of contemporary LLMs, we selected a dataset of 101
theorem statements from mathlib4, a comprehensive library of mathematical theorems formulated
*These authors contributed equally to this work
-----
Figure 1: Average Correction Efforts Across Topics
in Lean4. The dataset included a wide array of mathematical subjects (Appendix B) ensuring a
diverse and representative sample for our analysis. The dataset includes formal statements, their
corresponding natural language informalizations, and the specific mathematical topic.
We employed a zero-shot prompting approach with three advanced LLMs: GPT-3.5, GPT-4, and
Gemini Pro. This approach involved presenting each model with natural language statements from
our dataset and analyzing the formalized outputs they generated (Appendix C). We also streamlined
the evaluation process by trimming outputs to only include formal Lean statements.
Our evaluation methodology drew inspiration from (Jiang et al., 2023), employing a grading scale
ranging from 0 to 4. On this scale, a score of 0 indicates a flawless autoformalization, while a score
of 4 signifies an output requiring as much correction effort as formalizing a statement from scratch.
Our analysis revealed that the correction efforts for autoformalizations were similar among GPT-3.5
and GPT-4, averaging 2.238. Gemini Pro showed a slightly higher average effort of 2.248. Gemini
Pro boasts the most number of autoformalizations with scores of 0s and 1s. However despite this,
GPT-4 and Gemini Pro produced more instances with the maximum correction effort of 4 (Appendix
D). This is likely because as discussed in (Pichai & Hassabis, 2023), Gemini, with its natively multimodal design and recent training incorporating Lean4 data, performs better in reasoning tasks. This
is a step forward from GPT-4’s Mixture of Experts (MoE) design and earlier training phase, which
may have had less exposure to Lean4 (as evident from GPT-4’s misinterpretation of Lean4 capabilities in Appendix C). Both models surpass GPT-3.5, which relies on a monolithic architecture.
Figure 1 reveals performance disparities among LLMs across mathematical subjects, which suggests
that the LLMs’ performance is subject-dependent. For instance, all LLMs excelled in Information
Theory and Logic, but had trouble with category and model theory. We hypothesize that the frequency of these subjects on the internet is related to the performance of the LLM. Another potential
reason for the discrepancy between subjects might be attributed to the difficulties of autoformalization. Problems in category theory and model theory are harder to describe even in natural language,
so translating it to formal language is a more difficult task in itself. To improve our dataset, we
could label the difficulty of each problem statement to correct for correlation between problem- and
autoformalization-difficulty. The overall variance suggests that the LLMs’ performance is influenced by the subject matter of the theorem, pointing to potential avenues for future research.
3 CONCLUSION
Our research underscores the potential of LLMs in revolutionizing the field of formalization, with
implications extending across mathematics, computer science, and engineering. While LLMs can
substantially expedite research and development, our findings indicate that even the most sophisticated models currently fall short in achieving accurate autoformalization. This gap highlights the
unique opportunity presented by the development of Lean4, serving as a crucial testing ground for
enhancing LLM performance in autoformalization and automated theorem proving.
-----
URM STATEMENT
We acknowledge that all the authors of this work meets the URM criteria of ICLR 2024 Tiny Papers
Track.
REFERENCES
Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W. Ayers, Dragomir Radev,
and Jeremy Avigad. Proofnet: Autoformalizing and formally proving undergraduate-level mathematics. 2023.
Albert Jiang, Wenda Li, and Mateja Jammik. Multilingual mathematical autoformalization, 2023.
Gerwin Klein, June Andronick, Matthew Fernandez, Ihor Kuz, Toby Murray, and Gernot Heiser.
Formally verified software in the real world. Commun. ACM, 61(10):68–77, sep 2018. ISSN
[0001-0782. doi: 10.1145/3230627. URL https://doi.org/10.1145/3230627.](https://doi.org/10.1145/3230627)
OpenAI. MiniF2F benchmark. [URL https://github.com/openai/miniF2F/tree/](https://github.com/openai/miniF2F/tree/main)
[main.](https://github.com/openai/miniF2F/tree/main)
Sundar Pichai and Demis Hassabis. Introducing gemini: Our largest and most capable ai model,
Dec 2023. [URL https://blog.google/technology/ai/google-gemini-ai/](https://blog.google/technology/ai/google-gemini-ai/#sundar-note)
[#sundar-note.](https://blog.google/technology/ai/google-gemini-ai/#sundar-note)
A DATASET
[The evaluation benchmark can be accessed via HuggingFace here.](https://huggingface.co/datasets/shubhramishra/autoformalization-benchmark-lean4)
B INSIDE THE DATASET
Topic Number of problems
Algebra 9
Analysis 8
Category Theory 6
Combinatorics 6
Computability 4
Field Theory 9
Geometry 8
Group Theory 6
Info. Theory 5
Linear Algebra 5
Logic 5
Model Theory 6
Number Theory 5
Probability 6
Rep. Theory 3
Set Theory 5
Topology 5
Table 1: The Number of Problems in the Dataset, Based on Topic
C SAMPLE PROMPTS AND OUTPUTS
**Prompt:**
"I am providing you with a mathematical statement
in natural language. I want you to formalize it in the Lean4
language. \n" + natural_language_statement
-----
**Output:**
{
"id": 8,
"subject": "Information Theory",
"natural_language_statement": "The Hamming distance of an
element to itself is always 0.",
"ground_truth_formalization": "theorem hammingDist_self (x : ∀
i, β i) : hammingDist x x = 0 := by%n rw [hammingDist,
card_eq_zero, filter_eq_empty_iff]%n exact fun _ _ H => H
rfl%n#align hamming_dist_self hammingDist_self",
"gpt_3.5_turbo_response": "import data.list.basic%n%nopen list
%n%n-- Define the Hamming distance function%ndef
hamming_distance {α : Type} [decidable_eq α] (xs ys : list
_α) : N :=%n (xs.zip ys).count (λ xy, xy.fst \neq xy.snd)%_
n%n-- Formalize the statement%ntheorem
hamming_distance_self_zero {α : Type} [decidable_eq \alpha
] (x : α) :%n hamming_distance [x] [x] = 0 :=%nbegin%n -Apply the definition of hamming_distance%n unfold
hamming_distance,%n -- Simplify the expression%n simp,%
nend",
"gpt_4_response": "In Lean4, the Hamming distance of an
element to itself being always 0 can be formalized as
follows:%n%n‘‘‘lean%nimport data.nat.basic -- import
necessary library%n%ndef hamming_distance (x y : N) : N
:=%n nat.popcount (x lxor y) -- definition of Hamming
distance%n%ntheorem hamming_self (n : N) :
hamming_distance n n = 0 :=%nbegin%n rw hamming_distance,%
n rw nat.lxor_self,%n exact nat.popcount_zero,%nend%n‘‘‘%n
%n. This code first imports the necessary library for
natural numbers. Then it defines the Hamming distance as
the population count (number of 1’s) in the binary
representation of the bitwise exclusive or (lxor) of two
numbers. The theorem ‘hamming_self‘ states that the
Hamming distance of a number to itself is always 0. The
proof is straightforward: the bitwise exclusive or of a
number with itself is always 0, and the population count
of 0 is 0.",
"gemini_pro_response": "theorem
hamming_distance_to_self_is_zero (α : Type) [decidable_eq
\alpha] (x : α) :%n hamming_distance x x = 0 :%n by refl"
}
**Scores given to Problem 8: GPT-3.5 - 1. GPT-4 - 1. Gemini Pro**
- 1. All autoformalizations were near perfect with minor
changes necessary, which led us to giving scores of 1 for
all models.
**Output:**
{
"id": 99,
"subject": "Category Theory",
"natural_language_statement": "An object is injective if its
preadditive Yoneda functor preserves finite colimits.",
"ground_truth_formalization": "theorem
injective_of_preservesFiniteColimits_preadditiveYonedaObj
(J : C)%n [hP : PreservesFiniteColimits (
preadditiveYonedaObj J)] : Injective J :=",
-----
"gpt_3.5_turbo_response": "import category_theory.category%
nimport category_theory.functor%nimport category_theory.
limits%n%nopen category_theory%n%nuniverses v u%n%
nvariables (C : Type u) [category.{v} C]%n%nstructure
injective_object (X : C) : Prop :=%n(
preadditive_yoneda_functor_preserves_colimits :%n _∀_ (J
: Type v) [small_category J] (F : J → C),%n limits.
preserves_colimits_of_shape (discrete.functor F) (yoneda
.obj X))%n%nNote: This formalization assumes that Lean4
has already imported the necessary libraries for
category theory. The statement is formalized using the ‘
injective_object‘ structure, which takes an object ‘X‘
in category ‘C‘ as a parameter. The structure contains a
property ‘preadditive_yoneda_functor_preserves_colimits
‘, which asserts that for any small category ‘J‘ and
functor ‘F : J → C‘, the preadditive Yoneda functor
preserves finite colimits. The ‘limits.
preserves_colimits_of_shape‘ function is used to express
this preservation property.",
"gpt_4_response": "Formalizing this statement in Lean4
requires a deep understanding of category theory and the
Lean4 language. Unfortunately, Lean4’s standard library
does not currently support category theory to the
extent required to formalize this statement. %n%nHowever
, here is a rough sketch of how you might start to
define these concepts in Lean4:%n%n‘‘‘lean%n-- First, we
need to define what a category is%nstructure Category
:=%n (Obj : Type)%n (Hom : Obj → Obj → Type)%n (comp
: Π {X Y Z : Obj}, Hom X Y → Hom Y Z → Hom X Z)%n (id :
Π X : Obj, Hom X X)%n (assoc : Π {W X Y Z : Obj} (f :
Hom W X) (g : Hom X Y) (h : Hom Y Z),%n comp (comp f g)
h = comp f (comp g h))%n (id_left : Π {X Y : Obj} (f :
Hom X Y), comp (id X) f = f)%n (id_right : Π {X Y : Obj}
(f : Hom X Y), comp f (id Y) = f)%n%n-- Then, we need
to define what a functor is%nstructure Functor (C D :
Category) :=%n (onObj : C.Obj → D.Obj)%n (onHom : Π {X
Y : C.Obj}, C.Hom X Y → D.Hom (onObj X) (onObj Y))%n (
functor_id : Π {X : C.Obj}, onHom (C.id X) = D.id (onObj
X))%n (functor_comp : Π {X Y Z : C.Obj} (f : C.Hom X Y)
(g : C.Hom Y Z),%n onHom (C.comp f g) = D.comp (onHom f
) (onHom g))%n%n-- Then, we need to define what a
preadditive category is%nstructure Preadditive (C :
Category) :=%n (add : Π {X Y : C.Obj}, C.Hom X Y → C.
Hom X Y → C.Hom X Y)%n (zero : Π {X Y : C.Obj}, C.Hom X
Y)%n (neg : Π {X Y : C.Obj}, C.Hom X Y → C.Hom X Y)%n
(sub : Π {X Y : C.Obj}, C.Hom X Y → C.Hom X Y → C.Hom X
Y)%n -- plus the axioms for an additive group...%n%n-Then, we need to define what a Yoneda functor is%ndef
Yoneda (C : Category) (X : C.Obj) : Functor C (
Preadditive C) := sorry%n%n-- Then, we need to define
what a colimit is%ndef Colimit (F : Functor C D) : D.Obj
:= sorry%n%n-- Then, we need to define what it means
for a functor to preserve colimits%ndef
PreservesColimits (F : Functor C D) : Prop :=%n ∀ (G :
Functor C D) (c : Colimit G), Colimit (Functor.comp F G)
= F.onObj c%n%n-- Finally, we can define what it means
for an object to be injective%ndef Injective (C :
Category) (X : C.Obj) : Prop :=%n PreservesColimits (
Yoneda C X)%n‘‘‘%n%nPlease note that this is a very
-----
rough sketch and is not complete or correct Lean4 code.
The definitions of ‘Yoneda‘, ‘Colimit‘, and ‘
PreservesColimits‘ are particularly difficult and would
require a lot of additional code.",
"gemini_pro_response": "def injective [additive] (C :
Category) : Prop := ∀ (F : Functor Cˆ{op} → Ab) (X : C),
preserves_finite_colimits F ↔ preserves_finite_colimits
(Yoneda.obj C X) = F"
_[∼]_
}
**Scores given to Problem 99: GPT-3.5 - 2. GPT-4 - 3. Gemini Pro**
- 2. GPT 4 avoided a direct autoformalization and
proceeded to define other concepts in category theory
giving it a score of 3. GPT 3.5 however, attempted a
direct autoformalization with Lean4 syntax and provided
partially sound logic warranting a score of 2. Bard,
similar to GPT 3.5, provided a partially correct direct
autoformalization with Lean4 syntax warranting a score of
2.
GRAPHING THE CORRECTION EFFORT FOR SEPARATE MODELS
Autoformalizing to Lean4 with GPT 3.5
30
Autoformalizing to Lean4 with GPT 4
0 1 2 3 4
Correction Effort
30
20
20
10
10
Correction Effort
Autoformalizing to Lean4 with Gemini Pro
30
20
10
Correction Effort
-----
| [
"Brando, Miranda",
"Shubhra, Mishra",
"Aryan, Gulati",
"Devanshu, Ladsaria",
"Jasdeep, Sidhu"
] | 2024-06-01T00:00:00 | null | false | 0 | 0 | [
"Lean"
] | http://arxiv.org/abs/2406.06555 | https://arxiv.org/abs/2406.06555 | https://www.semanticscholar.org/paper/a450d03987ad64e4aec89e4d965f1d6c4f0ccda3 |
An empirical study on challenging math problem solving with gpt-4 | N/A | null | ## MATHCHAT: CONVERSE TO TACKLE CHALLENGING MATH PROBLEMS WITH LLM AGENTS
**Yiran Wu[1], Feiran Jia[1], Shaokun Zhang[1], Hangyu Li[2], Erkang Zhu[3], Yue Wang[3],**
**Yin Tat Lee[4], Richard Peng[5], Qingyun Wu[1], Chi Wang[3]**
1Pennsylvania State University 2Imperial College London 3Microsoft Research Redmond
4University of Washington 5University of Waterloo
_{yiran.wu, feiran.jia, shaokun.zhang, qingyun.wu}@psu.edu,_
_{ekzhu, wang.yue, wang.chi}@microsoft.com, [email protected],_
[email protected], [email protected]
ABSTRACT
Employing Large Language Models (LLMs) to address mathematical problems
is an intriguing research endeavor, considering the abundance of math problems
expressed in natural language across numerous science and engineering fields.
LLMs, with their generalized ability, are used as a foundation model to build
AI agents for different tasks. In this paper, we study the effectiveness of utilizing LLM agents to solve math problems through conversations. We propose MathChat, a conversational problem-solving framework designed for math
problems. MathChat consists of an LLM agent and a user proxy agent which is
responsible for tool execution and additional guidance. This synergy facilitates a
collaborative problem-solving process, where the agents engage in a dialogue to
solve the problems. We perform evaluation on difficult high school competition
problems from the MATH dataset. Utilizing Python, we show that MathChat
can further improve previous tool-using prompting methods by 6%.
1 INTRODUCTION
With Large Language Models (LLMs) demonstrating remarkable proficiency in various tasks spanning diverse domains (Bubeck et al., 2023; Zhang et al., 2023c), they are deemed the potential
foundation model for building autonomous agents (Xi et al., 2023; Wang et al., 2023a; Zhang et al.,
2024). Especially, multi-agent collaboration is a promising direction with the growing complexity of tasks being studied, with the benefit of information sharing and collective decision-making
among different agents with specialized skills. It is compelling to explore the potential of LLMs in
tackling mathematical problems considering the crucial role of mathematics (Wigner, 1990) and the
prevalence of mathematical problems expressed in natural language throughout numerous scientific
and engineering disciplines.
In this work, we investigate the potential of solving challenging math problems through conversations between agents. Due to the complex nature of these problems, we usually need to decompose
them into multiple steps and can only make meaningful progress when all previous steps are correct.
We believe conversations (together with code execution) are an ideal format, which enables iterative
refining and debugging of each step. We propose MathChat, a conversational framework tailored
to chat-based LLMs, where the math problem is solved with a mock conversation between an LLMbased agent and a user proxy agent (See Figure 1 for an example and Figure 2 for the workflow). We
also study and incorporate effective prompting methods to instruct the LLM-based agent to solve
challenging math problems more effectively.
We evaluate MathChat with GPT-4 on the MATH dataset (Hendrycks et al., 2021), a comprehensive collection of mathematical problems derived from various competitions and educational levels.
We target the level-5 difficulty problems within this dataset, which primarily consist of challenging
high school competition problems that even college students find difficult. Recognizing code execution as a major boost in performance, we compare two methods that both use Python: Program
of Thoughts (PoT) prompt (Chen et al., 2022), and Program Synthesis prompt (Drori et al., 2022).
We also include a vanilla prompt for reference. The evaluation shows that MathChat can further
-----
**①** **②** 5 −11 User
Let's use Python… Problem: Let a = (−3 and **b = (** 1 There exist vectors and such that the line p **d** AgentProxy
−4[)] 28 [)][ .]
containing and can be expressed in the form a **b** **v = p + dt .** Furthermore, for a certain choice of, it is the case that d
for all points lying on the same side of that lies on, the distance between and is . Find .
v **a** **b** **v** **a** _t_ **d**
from sympy import Matrixa = Matrix([5, -3, vector form. Then we'll proceed with determining that satisfies the conditions given.
To find the general equation of the line passing through points and, we can use the We will first find the general equation of the line passing through points and using the d **a** **b** **a** **b** AgentLLM
-4])b = Matrix([-11, 1, formula:Python **v = a + (b −Pythona)t.** Let’s compute b − **a.**
28]) Now that we have the difference vector b − **a, let's substitute it into the general equation of**
b_minus_a = b - a the line containing points and . This gives us:a **b** **v = a + (b −** **a)t = a + dt**
print(b_minus_a) Since we have, let's find that satisfies the given conditions, i.e., when the distance a **d**
between and is . Using the distance formula, we getv **a** _t_
_t = ||_ **v −** **a** || = || (a + dt) − **a** || = || **dt** || . Dividing by, we get:t 1 = || **d** ||
The magnitude of is . Since is parallel to d 1 **d** (b − **a), we can find by normalizing the d**
d = b_minus_a / vector (b − **a): d =** **b −** **a** . Let’s calculate .d
b_minus_a.norm()print(d) **Python** || **b −** **a** ||
Now we have found the vector .
d
**> Code Execution** Matrix([[-16], [4], [32]]) Proxy User
Matrix([[-4/9], [1/9], [8/9]]) Agent
−4/9
The vector is indeed equal to d 1/9 . LLM
8/9 Agent
**① Prompt** So, the line containing the points and can be expressed as:
a **b**
5 −4/9
**v = a + dt =** −3 1/9
**② Problem** (
−4[)][ +][ t] 8/9
−4/9
And the vector is d 1/9 . **Final Answer!**
8/9
Figure 1: Example of a math problem-solving process with MathChat. The user proxy agent
initiates a conversation by sending the math problem to be solved an LLM agent with preset prompt).
From GPT-4’s response, the user proxy agent extracts all code and executes them sequentially. Valid
code from previous runs is recorded and will be executed together with the new code to reflect the
step-by-step reasoning progress of the model. The results will be returned to GPT-4 and GPT-4
will continue its problem-solving process. While GPT-4 solves this problem with only one turn
of user message in this example, our framework allows multi-turn conversations and additional
query handling, shown in Figure 3. The user proxy agent will do pattern-matching (in our case,
the appearance of \boxed{} containing a final answer) in the LLM agent’s response to determine
whether to end the conversation.
improve previous tool-using prompting methods by 6%, and it can reach 60% accuracy on half of
the categories while having competitive performance across all categories. We also demonstrate
the extensibility of MathChat with different prompts and different tools from our experiment. We
conduct a detailed analysis of the failure reasons of all the methods evaluated.
2 RELATED WORK
**LLM Agent Systems** In the domain of LLM Agent Systems, various implementations have
demonstrated the utility and diversity of multi-agent AI models. BabyAGI (BabyAGI, 2023) exemplifies an AI-powered task management system using multiple LLM-based agents with a static
agent conversation pattern, while CAMEL (Li et al., 2023) showcases a communicative agent framework emphasizing role-playing and autonomous cooperation. Further, research on Multi-Agent
Debate (Liang et al., 2023; Du et al., 2023) highlights the efficacy of agent debates in enhancing
divergent thinking and factuality in LLMs. MetaGPT (Hong et al., 2023), a specialized application,
demonstrates the use of GPTs in collaborative software development. AutoGen (Wu et al., 2023)
is an open-source framework for creating diverse LLM applications with customizable, conversable
agents using LLMs, human input, and tools.
**Prompting Methods** Creative ways of using LLMs to solve math problems have emerged
lately (Wang et al., 2022; Zhou et al., 2023; Zheng et al., 2023; Chen et al., 2021; Weng et al.,
2022; Wu et al., 2024). One particular endeavor is using LLMs to offload arithmetic calculations
and other basic operations involved in math problem-solving to programs (Drori et al., 2022; Chen
-----
Problem
Assistant Message QueriesExtract ExecutionQueries User Proxy Agent
No
Assistant Message foundquery No Recurrent Error?
Yes
User Message "Continue" Results Errors "Solve it yourself"
User Proxy
LLM Agent Agent User
MathChat Message
Figure 2: MathChat workflow: After a math problem is fed into MathChat, the user proxy agent
will initiate a conversation with the LLM agent to solve the problem. In each turn of interaction, the
user proxy agent processes the message from the LLM agent (Assistant Message) and responds with
a User Message. This process continues until the user proxy agent detects a certain pattern to end
the conversation. The process in the rectangular on the right-hand side of the figure shows the inner
workflow of the user proxy agent once an Assistant Message is received. It shows the functionality
of executing any tool-using queries, such as Python code. It is also responsible for giving different
instructions corresponding to different types of messages from the LLM agent (More in Appendix
A). To illustrate this feature of the proxy agent, we give a concrete example in Figure 3.
et al., 2022; Gao et al., 2022). Cumulative Reasoning (Zhang et al., 2023d; Song et al., 2024) decomposes tasks into smaller components, streamlines the solving process, and generates thoughts in
a cumulative manner. Plan & Solve prompting (Wang et al., 2023b) ask the LLM to first generate
a plan and then solve it accordingly. Other general methods used to improve reasoning can also be
applied to math problems: (1) Chain-of-thought (CoT) prompting (Wei et al., 2022; Kojima et al.,
2022) elicits step-by-step reasoning process from LLMs. (2) Another effective way is to prompt
LLMs to solve problems in a multi-stage manner (Dua et al., 2022; Press et al., 2022; Creswell
et al., 2022; Long, 2023; Paranjape et al., 2023; Yao et al., 2022; Yang et al., 2022; Long, 2023;
Besta et al., 2023). Least-to-most prompting (Zhou et al., 2022) and Decomposed prompting (Khot
et al., 2022) break down a complex problem into smaller subproblems, and the subproblems are
solved sequentially to reach the final solution. (3) Utilizing tools can significantly boost the performance of LLMs (Shen et al., 2024; Parisi et al., 2022). ReAct (Yao et al., 2022) and ART (Paranjape
et al., 2023) both use few-shot prompting to interleave step-by-step reasoning and tool-using. (4)
Self-consistency (Wang et al., 2022), built on top of CoT, samples several different reasoning paths
for a problem and selects the answer with the majority vote. Li et al. (2022) extends self-consistency
by training a verifier to verify the correctness of each step. By decomposing a problem-solving
process, Tree-of-Thoughts (Yao et al., 2023) proposes a set of thoughts for each intermediate step,
exploring and maintaining the most promising thoughts for sequential actions.
3 MA T HCH A T: A CONVERSATIONAL FRAMEWORK FOR MATH PROBLEM
SOLVING
In this section, we introduce MathChat, a conversational framework for math problem-solving.
**A conversational framework with user proxy agent. MathChat is a framework that simulates a**
mock conversation between an LLM agent (GPT-4 in our case) and a user proxy agent. Here a user
proxy agent is an agent playing the user’s role in conversations with the LLM agent. In MathChat,
the LLM agent and the user proxy agent work together to solve the math problem. The workflow
of this framework is presented in Figure 2. The user proxy agent takes a math problem to be solved
as input and would initiate a conversation with the LLM agent. The initial message from the user
proxy agent consists of an initial prompt and the problem to be solved. The initial prompt is used to
instruct the LLM agent to solve the problem collaboratively with the user (effectively the user proxy
agent in the MathChat system) in a certain desired manner. This framework is designed in this
conversational manner in order to leverage the chat-optimized feature of state-of-the-art LLMs, e.g.,
GPT-4. Another distinct benefit of this framework is that it enables multi-turn dialogues, which can
be particularly useful in addressing complex issues that require multi-step reasoning and tool using.
**Prompting and tool-using in MathChat. With proper modifications, effective prompting methods**
from existing research, such as CoT and tool-using, can be integrated into the MathChat frame
-----
Let's use Python… Problem: Fake gold bricks are made by covering concrete cubes with gold paint, so the cost of the paint is proportional
to their surface area while the cost of the concrete is proportional to their volume. If a 1 inch cube costs $1.30 to make while a 2 inch cube
costs $6.80, then how much would a 3 inch cube cost? User Proxy Agent
LLM LLM LLM LLM
…we can write the cost equation as Agent To continue solving the problem, we will first Agent … Let me correct Agent …I will correct the code Agent
follows: solve the system of linear equations to find the this and provide the and provide the proper
_cost = k1 * surface_area + k2 * volume_ values of and . Then, we will calculate the k1 _k2_ correct result. result.
…We are given: cost of a 3 inch cube using the cost equation. … …
1 inch cube costs $1.30 k1_value, k_values =
_surface_area_1 = volume_1 = (1)6 * (1)[3]_ [2] import sympy as spk1, k2 = sp.symbols('k1 k2', positive=True) k2_value = sp.solve((eq1, eq2), (k1, k2), sp.solve((eq1, eq2), (k1, k2), dict=True)k1_value =
_1.3 = k1 * surface_area_1 + k2 * volume_1_ … dict=True)User k_values[0][k1]
2 inch cube costs $6.80 surface_area_2 = 6 * (2)**2 … k2_value =
_surface_area_2 = 6 * (2)[2]_ volume_2 = (2)**3 cost_3 = k_values[0][k2]
_volume_2 =(2)[3]_ eq2 = sp.Eq(6.8, k1 * surface_area_2 + k2 k1_value[k1] * …
_6.8 = k1 * surface_area_2 + k2 * volume_2_ - volume_2) surface_area_3 + cost_3 = k1_value *
… We will solve these equations to find k1_value, k2_value = sp.solve((eq1, eq2), k2_value[k2] * surface_area_3 +
k1 and k2. Afterwards, we will find the (k1, k2)) volume_3 k2_value *
cost of the 3 inch cube using the same surface_area_3 = 6 * (3)**2 … volume_3
cost equation. volume_3 = (3)**3 …
_surface_area_3 =volume_3 = (3)6 * (3)[3]_ [2] cost_3 = k1_value* surface_area_3 + k2_value * volume_3 Let's now run the corrected code to get the cost of a 3-inch Let's run the corrected code to get the cost of a
_cost_3 = k1 *surface_area_3 + k2 * volume_3_ … cube. 3-inch cube.
**① Ask to continue** **② Return valid result** **③ Return error** **Correct!**
User Continue. Please keep solving the ValueError: not enough
AgentProxy problem until you need to query. (If you get to the answer, put it in \boxed{}.) 54k1 + 27k2 values to unpack (expected 2, got 1) 18.90
Figure 3: An example demonstrating how the user proxy agent handles different types of messages
received from GPT-4 in MathChat. Specifically, the user proxy agent may respond in the following
ways: (1) asking the LLM agent to continue because no code block (i.e., query) is detected; (2)
returning the valid results from code execution; and (3) returning the error message from Python
execution. Note that GPT-4 may change the query if the old code is undesired based on the messaged
from the user proxy agent. In the last step, GPT-4 corrects the query, and the final result is returned.
Figure 4: The prompt used in the initial message of the user proxy agent in MathChat. It instructs
Let's use Python to solve a math problem. **①** **① Tool-using**
Query requirements: Coding format
You should always use the 'print' function for the output and use fractions/radical
forms instead of decimals.
You can use packages like sympy to help you.
You must follow the formats below to write your code:
```python
**② Strategy**
First state the key idea to solve the problem. You may choose from three ways to solve the problem: **②**
**Selection**
Case 1: If the problem can be solved with Python code directly, please write a program to solve it. You can
enumerate all possible arrangements if needed.
Case 2: If the problem is mostly reasoning, you can solve it by yourself directly. **Multi-step tool-**
Case 3: If the problem cannot be handled in the above two ways, please follow this process:
**using and**
1. Solve the problem step by step (do not over-divide the steps). **reasoning**
2. Take out any queries that can be asked through Python (for example, any
calculations or equations that can be calculated). Step by step
3. Wait for me to give the results. Facilitate dialogue
4. Continue if you think the result is correct. If the result is invalid or unexpected, Error handeling
please correct your query or reasoning.
After all the queries are run and you get the answer, put the answer in \boxed{}. **③** **③ Final Answer**
the LLM agent to solve a problem collaboratively with the user proxy agent in a certain way.
work. Specifically, for the prompt in the initial message, we aggregate multiple effective prompting
techniques to instruct the LLM agent. We present the designed prompt in Figure 4, which consists
of three main components.
- Tool-using Prompt: This component prompts the LLM to use Python programming in the correct
format to tackle the problem. We use the ‘query requirement’ subsection to specify the coding
format so that the user proxy agent can parse the code and return the corresponding results.
-----
- Problem-Solving Strategy Selection Prompt: This component instructs the LLM agent to select
from three possible problem-solving strategies and to perform multi-stage reasoning and toolusing in the last strategy. The problem-solving strategies include the following three cases, which
cover the most effective strategies from existing literature on math problem-solving. (Case 1)
_Write a Python program to solve the problem directly. This corresponds to single-stage tool-using_
methods similar to Gao et al. (2022); Drori et al. (2022); Chen et al. (2022). (Case 2) Solve the
_problem directly without Python. This strategy allows GPT-4 to exercise its inherent reasoning_
capacity to solve the problem at hand. (Case 3) Solve the problem step by step and use Python
_to help with math operations. If the first two ways are not suitable, we ask the model to choose_
this way to solve the problem. We craft a zero-shot version of the multi-step tool-using prompt
that allows the model to flexibly interleave between multi-step reasoning and Python code, similar
to Yao et al. (2022); Paranjape et al. (2023); Schick et al. (2023). In this case, we also ask the
model to handle errors and unexpected results from the run of programs Ni et al. (2023).
- Final Answer Encapsulation Prompt: This component of the prompt instructs the LLM agent to
enclose the final answer in \boxed{}, which will be used as an indicator to end the conversation. This interaction between the LLM agent and the user proxy agent will not be ended until
_\boxed{} is detected or max rounds of conversations are reached._
We acknowledge that there could be alternative ways to design the prompt. Fortunately, it is fairly
easy to refine the prompt, for example, further enabling the usage of Wolfram Alpha in addition to
Python, in our framework. We perform an empirical evaluation accordingly to test two alternative
versions of the prompt in Section 5.2.
4 EVALUATION
**Dataset. We perform evaluations on all the level-5 (the highest difficulty level) problems from the**
test set of MATH dataset Hendrycks et al. (2021). Compared to other datasets for mathematical
problems such as GSM8k Cobbe et al. (2021), the level-5 problems are much more challenging
and include the application of theorems and complex equation derivation. The MATH dataset has 7
categories of problems: Prealgebra, Algebra, Number Theory, Counting and Probability, Geometry,
Intermediate Algebra, and Precalculus. In our evaluation, we remove Geometry from the evaluation
to make it consistent with previous work Drori et al. (2022) (additional explanation in Appendix B).
**Evaluated Methods. Most previous work uses few-shot examples to elicit the reasoning of LLMs**
and tool-using. It is important to select similar examples to the unanswered problem, and then
annotate the examples to cover all the cases that the LLMs might encounter. A considerable amount
of effort and careful consideration are required in this process. For example, Khot et al. (2022);
Zhou et al. (2022) relies on elaborate examples to showcase the patterns, Paranjape et al. (2023)
maintains an example library to choose examples. Note that these methods use elementary math
problems and it requires even more effort to prepare and choose the examples needed for challenging
math problems. On the other hand, multiple existing studies OpenAI (2023); Bubeck et al. (2023)
reveal GPT-4’s remarkable capacity to follow instructions. Thus, we are interested in zero-shot
prompting techniques that could enhance math-solving of GPT-4, without any example selection and
annotations. Following this criterion, we evaluate our MathChat framework with the introduced
prompt and the following methods which are all zero-shot methods: vanilla prompt, Program of
Thoughts Chen et al. (2022), and the Program Synthesis prompt from Drori et al. (2022).
1. Vanilla prompting: GPT-4 can perform CoT reasoning without few-shot examples. To evaluate
GPT-4’s performance on solving the problem directly, we use a default prompt adapted from
the few-shot prompt in MATH dataset: "Solve the problem carefully. Put the
final answer in \boxed{}. _{Problem}"._
2. Program of Thoughts (PoT): We use the zero-shot PoT prompting from Chen et al. (2022),
which asks a model to write a Solver function to solve a problem and return the final answer
directly.
3. Program Synthesis (PS) prompting: Similar to PoT, the Program Synthesis (PS) prompting method Drori et al. (2022) uses a prompt to ask the model to write a program to solve a problem: "Write a program that answers the following
question: _{Problem}"_
**Evaluation Details. Hyperparameters plays critical role in determining model performance Zhang**
et al. (2023a;b). To ensure a fair comparison, we use the default configurations from the OpenAI API
-----
on GPT-4 for all methods. In MathChat, we allow a max round of 15 messages between GPT-4
and the user proxy agent. The agent will explicitly ask GPT-4 to solve each step by itself if it detects
errors from 3 consecutive executions. To avoid extremely long responses from the user proxy agent,
the agent will replace any result that exceeds 600 tokens with a warning text in the user message to
ask the GPT-4 to revise the previous code. We manually go through the answer of all the methods
to count all the correct answers. For vanilla prompt, Program Synthesis, and MathChat, we ask
GPT-4 to enclose the final answer in \boxed{}, so only the answers in the box will be extracted.
For PoT, we follow the original paper to take the return of the Solver function as the final answer.
5 RESULTS
5.1 MAIN RESULTS
We perform an evaluation on six categories of level-5 problems from the MATH dataset. We report
the problem-solving accuracy of different methods in each category in Table 1. Compared to vanilla
prompting, which shows the native capability of GPT-4, using Python with PoT or PS improves the
overall accuracy by around 10%. We can see this improvement mostly in the categories that involve
more number manipulations (Counting & Probability and Number Theory) and more challenging
categories (Intermediate Algebra and Precalculus). For Algebra and Prealgebra, however, PoT and
PS have little improvement or even lead to lower accuracy. Compared with PoT and PS, MathChat
can further improve the total accuracy by around 6%, and have competitive performance across
all the categories. It is worth highlighting that MathChat improves the accuracy in the Algebra
category over other methods by around 15%. Considering all the methods, Intermediate Algebra
and Precalculus can only be solved with a low accuracy rate of around 20%. More than half of the
problems from the other categories can be solved correctly by MathChat.
|Problem Count|Algebra C.Prob I.Alg N.Theory Prealg Precalc Total 307 123 280 154 193 135 1192|
|---|---|
|MathChat PoT PS Vanilla|59.93% 52.03% 17.85% 60.39% 60.10% 19.26% 44.71% 42.67% 50.41% 17.50% 54.55% 52.33% 16.30% 37.67% 43.32% 44.71% 20.36% 61.03% 55.96% 18.52% 39.60% 46.58% 25.20% 2.86% 28.57% 54.92% 7.41% 28.69%|
|---|---|
Table 1: Accuracy on all the problems with difficulty level-5 from different categories of the MATH
dataset with different methods.
5.2 ADDITIONAL EVALUATION ON MA T HCH A T WITH ALTERNATIVE PROMPTS
|Problem Count|Algebra C.Prob I.Alg N.Theory Prealg Precalc Total 50 50 50 50 50 50 300|
|---|---|
|MathChat w/ Two-tools MathChat w/ Python MathChat PoT PS Vanilla|33 22 6 27 29 10 127 26 19 7 22 31 13 118 30 24 8 34 28 10 134 20 19 9 24 24 7 103 17 19 12 31 26 5 110 26 13 1 17 21 1 79|
|---|---|
Table 2: Additional evaluation of MathChat with two alternative prompts. 50 problems are sampled from each problem category for this evaluation. MathChat w/Two-tools and MathChat w/
Python are two alternative prompts.
MathChat allows easy incorporation of different prompts and tools. We perform an additional
evaluation to test two alternative initial prompts with MathChat to demonstrate its extensibility.
(1) A simplified prompt with Python: In this alternative, we only keep the ‘query requirements’
subsection for python coding format and the step-by-step tool-using (i.e., case 3) from the default
prompt. (2) A simplified prompt with Python and Wolfram Alpha: In this alternative, on top of
-----
In how many ways can one write 1, 2, The number ( 2 + 3)[3]can be written in the
3, 4, 5, and 6 in a row so that given form a 2 + b 3 + c 6, where,, and are a b _c_
any number in the row, all of its divisors integers. What is a + b + c?
(not including itself) appear to its left? Problem
Problem Key idea: Expand the expression ( 2 + 3)[3] LLM
using binomial theorem … Agent
… import …
We can solve this problem by enumeration a, b, c = symbols('a b c')
since there aren't many cases to go expr = (sqrt(2) + sqrt(3))**3
through. LLM expan_expr = expr.expand()
Agent
a_term = 2 * expan_expr.coeff(sqrt(2), 1)
Consider the fact that we need to put '1' at b_term = 2 * expan_expr.coeff(sqrt(3), 1)
the beginning of the row since c_term = expanded_expr.coeff(6, 1)
there are no divisors of '1'. Now we have print(a_term, b_term, c_term)
the base for all other numbers to fit.
Now let's enumerate the possibilities by
placing these numbers in a valid sequence: Return: 22 18 0 User Proxy Agent
…
The answer is a + b + c = 22 + 18 + 0 = 40 AgentLLM
Type 2. the `2 * … ` in the code came out of not
Type 1. The plan is to enumerate all where.
possible outcomes, which includes
5*4*3*2*1=120 sequences. The space is If the 2s for both terms are removed, the
big and answer can be corrected to:
_a + b + c = 11 + 9 + 0 = 20_
Figure 5: One example is selected for each of the first two failures. Type 1 failure: in the first
problem, the LLM agent fails to give a plausible plan. It chooses to enumerate all sequences, and it
does not use tools to help with it. Type 2 failure: the second problem shows that the model fails to
give the correct code to solve the problem, while it follows the problem requirements and the overall
direction is correct. With minor changes to the code, the final answer can be correct.
alternative (1), we add Wolfram Alpha, a computational engine, as an additional tool for the LLM
agent to choose from. Details of these two alternative prompts are in Appendix B. We perform an
evaluation on randomly sampled 50 examples from each of the six problem categories. We also
include results from other methods on the sample problems for comparison in Table 2. MathChat
still performs better than other methods with the two newly crafted prompts. With MathChat, the
step-by-step prompt that allows both Python and Wolfram performs the best on Algebra, while the
new prompt with only Python solves the most problems on Prealgebra and Precalculus, but has a
worse performance on Number Theories. Overall, MathChat with the default prompt still performs
the best.
6 FAILURE ANALYSIS
6.1 FAILURE REASONS
We first summarize the failure cases according to the reasons for failure, based on the systematic
math problem-solving process established by George P´olya Polya (2004). The process consists
of (1) understanding the problem; (2) devising a plan; (3) executing the plan; (4) reviewing and
extending. We observe failures of the following three main types. We give one example each for the
two types of failures in Figure 5. More example are provided in Appendix D.
**Type 1. Failure to devise or select an appropriate plan or path to solve the problem. This type**
encompasses cases where GPT-4 fails to provide a proper way to approach the problem. In these
instances, the answer is consistently incorrect, even though each individual step of the calculation
is accurate. Failure cases of this type are typically tricky problems that even math experts find
challenging.
**Type 2. Failure to flawlessly execute the devised plan. Math problem-solving requires rigorous**
and precise reasoning steps. A minor error in calculation or symbol manipulation could be fatal
and lead to a wrong answer. This type of error is considered ‘minor’ because they are easier to be
fixed. This type of error contributes to a fair amount of failures, where the overall direction of the
-----
solves for b correctly, but gets an ValueError when using the program to extract m and n.
−b ± 5
If x[2] + bx + b + 3 = 0 has roots of the form 2, where b > 0, then b = m + _n for positive integers m, n. Find m + n._
Problem
Vanilla PoT PS
… To solve the given problem, we can follow
… def solver(): these steps: 1. Use the quadratic formula for
Thus, we have the equation: … the given equation. 2. Compare the roots in the
Now, we solve for using the quadratic b + 3 = _[b][2][ −5]4_, 0 =b _b[2]_ −4b −17 equating the product of our roots, root1 and root2, with the product # We can find the value of b by given form with the ones obtained from the quadratic formula. 3. Solve the resulting equation for b.
formula: obtained from Vieta's formulas: Here's the python code to find the value of b:
_b =_ −(−4) ± (−4)2 ⋅1[2] −4 ⋅1 ⋅(−17), product_roots = root1 * root2vieta_product = -b - 3# Now we'll solve the equation ② # Print the positive bfor sol in solution_b:…
68 product_roots = vieta_product for b: if sol[0] > 0:
_b = [4 ±]_ 2 **①** equation = sp.Eq(product_roots, positive_b = sol[0]
vieta_product) print("Value of b:", positive_b)
Since we are given that is positive, we b solution = sp.solve(equation, b)
can disregard the negative square root, # Since b > 0, we'll take the # Express b as m + sqrt(n)
68 positive solution: m, n = symbols('m n') **④**
and we have: Thus, m = 2 and b = n = 68[4 +] 2, so m + n = 70. solution if sol > 0][0]b_value = [sol for sol in ③ m_expr, n_expr = solve(Eq(positive_b, m + sqrt(n)), (m, n))
… …
① (−4)[2] is missing ② should be b + 3 ③ TypeError! ④ ValueError!
Figure 6: An example where MathChat is correct and others fail. All other methods fail due to
Type 2 failure. 1. Vanilla prompt: when calculating b, didn’t include −4[2]. 2. PoT: it first calculates
vieta product wrong, even is this is corrected, another TyperError will occur. 3. PS: it
problem-solving is correct, but one mistake in a basic derivation leads to the wrong answer. Note
that an error for Python execution is also included in this type, where GPT-4 fails to write a runnable
code leading to the error.
**Type 3. Other technical errors. There are other technical errors causing the failure. One example**
of such failure is lack of information due to the removal of ASY code.
6.2 FAILURE CASES USING DIFFERENT METHODS ON GPT-4
In Table 3, we present the frequency of successful outcomes for each method (represented in each
row), while all other methods fail, categorized according to different problem instances. This table
serves to highlight the distinct advantage that a particular method exhibits across various problem
categories. Similarly, in Table 4, we summarize the frequency of instances where one method fails
while all other methods succeed. A high number in this table signifies the unique disadvantage of
the method in question.
These statistics demonstrate the robustness of MathChat in comparison to other methods.
MathChat leverages conversation to enhance error correction when utilizing external tools, thereby
hypothesizing a reduction in failures within the third type.
|MathChat PoT PS Vanilla|27 8 21 13 6 9 11 9 19 6 3 5 12 6 22 11 10 8 12 4 5 3 10 3|84 53 69 37|
|---|---|---|
Algebra C.Prob I.Alg N.Theory Prealg Precalc Total
Table 3: The number of problems where one method succeeds, and all the other methods fail (the
higher the better for the concerned method in each row).
We take one example from each table to analyze the failures of these methods. We first take an
Algebra problem that MathChat succeeds but others fail to analyze the failures of other methods
(Figure 6) For this problem, other methods fail to execute this plan without any mistakes, causing
the second type of failure. While vanilla prompting has a calculation error, the other two methods
get execution errors from running the code. We run these methods three more times and they still
fail to solve the problem. From Table 4, we take the only Precalculus instance that MathChat is
wrong while all the other methods are correct. Through investigating, we find that MathChat gives
-----
Algebra C.Prob I.Alg N.Theory Prealg Precalc Total
|MathChat PoT PS Vanilla|6 2 0 5 4 1 22 5 0 6 18 2 17 5 1 5 14 0 16 19 11 28 19 5|18 53 42 98|
|---|---|---|
Table 4: The number of problems where one method fails and all the other methods succeed (the
lower, the better for the concerned method in each row).
longer solutions to the problem than all the other methods, and also contains Type 2 failures. This
problem might indicate a potential correlation between the accuracy and length of responses. We
present more details in investigating this possible correlation and also the Precalculus example in
Appendix D.
7 SUMMARY AND FUTURE WORK
7.1 SUMMARY
In this paper, we introduce MathChat, a conversational framework to solve math problems with the
collaboration of an LLM agent and a user proxy agent. MathChat is designed for chat-optimized
models like GPT-4, and it is extensible to be used with different prompts and different tools with
minimal effort. Based on the framework, we also derive a prompt that aggregates previous prompting
techniques to be used on MathChat. Our evaluation of level-5 problems from the MATH dataset
demonstrates the effectiveness of MathChat to solve more complex and challenging problems.
Despite its improvements over previous methods, the results show that complex math problems is
still challenging for recent powerful LLMs, like GPT-4, even with help from external tools. We
discuss potential directions to further improve math problem-solving below.
7.2 ENHANCED AGENT SPECIALIZATION IN PROBLEM SOLVING
_The Society of Mind Minsky (1988) posits that intelligence emerges from the interaction of relatively_
simple agents. In MathChat, where two agents collaborate to solve math problems, the behavior
of the LLM agent, guided by specific instructions for task completion and decision-making, shows
significant variance. To enhance consistency and effectiveness, it could be beneficial to decompose
this process into specialized tasks, each handled by a dedicated agent. One agent, for instance, could
focus on comprehending and developing initial solutions, while another evaluates the most suitable
problem-solving strategy for the given problem.
Other than decomposing the solving process, it is possible to categorize problems by type, difficulty
level, or other aspects, and accordingly select the most effective agent (prompting method) for each
category. Our analysis in Section 6.2 demonstrates that various methods exhibit distinct advantages
depending on the problem type. This approach is akin to the Mixture of Experts model Shazeer
et al. (2017), where specific prompting strategies are used in place of sub-neural networks, and it
also aligns with the concept of prompting chaining Wu et al. (2022), which involves classifying a
task under various scenarios for targeted resolution.
7.3 ASSISTANCE IN HUMAN PROBLEM-SOLVING
While LLMs is showing great potential to aid in human problem-solving, we recognize that much
work remains in developing a reliable LLM-based problem-solving assistant. When conducting
failure analysis in Section 6, we can spot calculation errors in LLM responses easily but may struggle
with identifying logical or factual inaccuracies, especially with unfamiliar concepts. This could lead
to potential misinformation, especially for students who are learning new concepts and have weaker
judgements. Although our evaluation indicates that incorporating Python enhances LLM problemsolving abilities, relying solely on Python or LLMs has limitations, as Python solutions (such as
brute-force or simulations) may not suit human learning needs.
A possible mitigation would be to verify each step of the solving process with external tools and
established knowledge. When the LLM generates each intermediate step to solve a problem, Python
-----
can be used to check for calculation errors in the step, and external databases can be consulted to
validate any theorems mentioned.
REFERENCES
[BabyAGI. Github — babyagi. https://github.com/yoheinakajima/babyagi, 2023.](https://github.com/yoheinakajima/babyagi)
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna
Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, et al.
Graph of thoughts: Solving elaborate problems with large language models. _arXiv preprint_
_arXiv:2308.09687, 2023._
S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general
intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared
Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large
language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W Cohen. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint
_arXiv:2211.12588, 2022._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. arXiv preprint arXiv:2205.09712, 2022.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard Tang, Albert Lu, Elizabeth Ke, Kevin Liu,
Linda Chen, Sunny Tran, Newman Cheng, et al. A neural network solves, explains, and generates
university math problems by program synthesis and few-shot learning at human level. Proceed_ings of the National Academy of Sciences, 119(32):e2123433119, 2022._
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. _arXiv preprint_
_arXiv:2305.14325, 2023._
Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. Successive prompting for decomposing complex questions. arXiv preprint arXiv:2212.04092, 2022.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021._
Sirui Hong, Xiawu Zheng, Jonathan Chen, Yuheng Cheng, Ceyao Zhang, Zili Wang, Steven
Ka Shing Yau, Zijuan Lin, Liyang Zhou, Chenyu Ran, et al. Metagpt: Meta programming for
multi-agent collaborative framework. arXiv preprint arXiv:2308.00352, 2023.
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish
Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. arXiv
_preprint arXiv:2210.02406, 2022._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
Guohao Li, Hasan Abed Al Kader Hammoud, Hani Itani, Dmitrii Khizbullin, and Bernard Ghanem.
Camel: Communicative agents for ”mind” exploration of large scale language model society,
2023.
-----
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. On the
advance of making language models better reasoners. arXiv preprint arXiv:2206.02336, 2022.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng
Tu, and Shuming Shi. Encouraging divergent thinking in large language models through multiagent debate, 2023.
Jieyi Long. Large language model guided tree-of-thought. arXiv preprint arXiv:2305.08291, 2023.
Marvin Minsky. Society of mind. Simon and Schuster, 1988.
Ansong Ni, Srini Iyer, Dragomir Radev, Ves Stoyanov, Wen-tau Yih, Sida I Wang, and Xi Victoria Lin. Lever: Learning to verify language-to-code generation with execution. arXiv preprint
_arXiv:2302.08468, 2023._
OpenAI. Gpt-4 technical report, 2023.
Bhargavi Paranjape, Scott Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and
Marco Tulio Ribeiro. Art: Automatic multi-step reasoning and tool-use for large language models.
_arXiv preprint arXiv:2303.09014, 2023._
Aaron Parisi, Yao Zhao, and Noah Fiedel. Talm: Tool augmented language models. arXiv preprint
_arXiv:2205.12255, 2022._
George Polya. How to solve it: A new aspect of mathematical method. Number 246. Princeton
university press, 2004.
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A Smith, and Mike Lewis. Measuring
and narrowing the compositionality gap in language models. arXiv preprint arXiv:2210.03350,
2022.
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761, 2023.
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton,
and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.
_arXiv preprint arXiv:1701.06538, 2017._
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. Hugginggpt: Solving ai tasks with chatgpt and its friends in hugging face. Advances in Neural Information
_Processing Systems, 36, 2024._
Linxin Song, Jiale Liu, Jieyu Zhang, Shaokun Zhang, Ao Luo, Shijian Wang, Qingyun Wu, and
Chi Wang. Adaptive in-conversation team building for language model agents. arXiv preprint
_arXiv:2405.19425, 2024._
Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai
Tang, Xu Chen, Yankai Lin, et al. A survey on large language model based autonomous agents.
_arXiv preprint arXiv:2308.11432, 2023a._
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language
models. arXiv preprint arXiv:2305.04091, 2023b.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency
improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny
Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint
_arXiv:2201.11903, 2022._
Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022.
-----
Eugene P Wigner. The unreasonable effectiveness of mathematics in the natural sciences. In Math_ematics and science, pp. 291–306. World Scientific, 1990._
Qingyun Wu, Gagan Bansal, Jieyu Zhang, Yiran Wu, Shaokun Zhang, Erkang Zhu, Beibin Li,
Li Jiang, Xiaoyun Zhang, and Chi Wang. Autogen: Enabling next-gen llm applications via multiagent conversation framework. arXiv preprint arXiv:2308.08155, 2023.
Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, and
Carrie J Cai. Promptchainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts, pp. 1–10,
2022.
Yiran Wu, Tianwei Yue, Shaokun Zhang, Chi Wang, and Qingyun Wu. Stateflow: Enhancing llm
task-solving through state-driven workflows. arXiv preprint arXiv:2403.11322, 2024.
Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe
Wang, Senjie Jin, Enyu Zhou, et al. The rise and potential of large language model based agents:
A survey. arXiv preprint arXiv:2309.07864, 2023.
Jingfeng Yang, Haoming Jiang, Qingyu Yin, Danqing Zhang, Bing Yin, and Diyi Yang. Seqzero:
Few-shot compositional semantic parsing with sequential prompts and zero-shot models. arXiv
_preprint arXiv:2205.07381, 2022._
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao.
React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629,
2022.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. arXiv
_preprint arXiv:2305.10601, 2023._
Shaokun Zhang, Feiran Jia, Chi Wang, and Qingyun Wu. Targeted hyperparameter optimization with
lexicographic preferences over multiple objectives. In The Eleventh international conference on
_learning representations, 2023a._
Shaokun Zhang, Yiran Wu, Zhonghua Zheng, Qingyun Wu, and Chi Wang. Hypertime:
Hyperparameter optimization for combating temporal distribution shifts. _arXiv preprint_
_arXiv:2305.18421, 2023b._
Shaokun Zhang, Xiaobo Xia, Zhaoqing Wang, Ling-Hao Chen, Jiale Liu, Qingyun Wu, and
Tongliang Liu. Ideal: Influence-driven selective annotations empower in-context learners in large
language models. arXiv preprint arXiv:2310.10873, 2023c.
Shaokun Zhang, Jieyu Zhang, Jiale Liu, Linxin Song, Chi Wang, Ranjay Krishna, and Qingyun
Wu. Training language model agents without modifying language models. _arXiv preprint_
_arXiv:2402.11359, 2024._
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. Cumulative reasoning with
large language models. arXiv preprint arXiv:2308.04371, 2023d.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting
improves reasoning in large language models. arXiv preprint arXiv:2304.09797, 2023.
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun Luo, Zipeng Qin, Shaoqing Lu, Anya Jia,
Linqi Song, Mingjie Zhan, et al. Solving challenging math word problems using gpt-4 code
interpreter with code-based self-verification. arXiv preprint arXiv:2308.07921, 2023.
Denny Zhou, Nathanael Sch¨arli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Olivier Bousquet, Quoc Le, and Ed Chi. Least-to-most prompting enables complex
reasoning in large language models. arXiv preprint arXiv:2205.10625, 2022.
-----
A SUPPLEMENTARY DETAILS ON THE USER PROXY AGENT
The user proxy agent in MathChat takes a problem and put it in a message with an initial prompt,
and sends the message to the LLM agent. Then the agent is responsible for extracting and executing
queries and also providing additional guidance. Here are all functionalities of the user proxy agent
(the workflow is shown in Figure 2):
1. Extract Queries: The user proxy agent needs to match the pattern specified in the initial
message to extract all tool-using queries. With our designed prompt, the agent matches all
code blocks in the message and extracts the code.
2. ”Continue”: If no query is detected in the message, the agent will send this message to the LLM agent: "Continue. Please keep solving the problem
until you need to query. (If you get to the answer, put it in
_\boxed{}.". This asks the agent to keep solving the problem and reminds it to end the_
conversation by putting the answer in the box.
3. Query Execution: Any tool-using queries extracted will be executed sequentially. For Python,
we set the time limit to be 5 seconds for execution. As shown in Figure 1, the previous valid
code is recorded. All the execution results will be concatenated sequentially (including errors).
4. Recurrent Error detection: If LLM agent sends 3 consecutive errors, the user proxy
agent will replace the third error message with this message: "Please revisit
the problem statement and your reasoning. If you think this
step is correct, solve it yourself and continue the next step.
Otherwise, correct this step.". To avoid sticking to this error, the LLM agent is
asked to solve this step without tools and move on.
5. Repetitive results: This is not shown in the workflow, but the agent also detects another situation where the LLM agent gives the same tool-using query from the last one or the result
is the same as the last query. Then the message is appended to the execution result to remind
the agent to avoid giving the same queries: "Your query or result is same from
the last, please try a new approach.".
6. Long **query** **results:** It is possible that LLM agent requires a query result
that is too long to be passed back (such as long results from the print function in a for loop in Python). The proxy agent will replace any query result that is longer than 2000 chars (approximately 600 tokens) with this message: "Your requested query response is too long. You might have
made a mistake. Please revise your reasoning and query.".
In MathChat, if the tool-using query and the end indicator are detected in the same message, the
result from the query will be returned, and the conversation will continue. This is to prevent early
stops where the LLM agent predicts the execution result and puts it in a box other than waiting for
the result.
B SUPPLEMENTARY DETAILS ON EXPERIMENT SETTINGS
Rational in removing the geometry problems from testing: Most geometry problems from this
dataset contain an Asymptote code to plot the figure. But the currently available version of GPT-4
cannot accept image input. If the raw code is included, it can leak information to the model through
exact numbers for the coordinates. Taking these issues into consideration, we skip the evaluation on
Geometry problems and remove ASY code from all the other categories (though this could result in
a lack of enough information for some problems). The correct answer to each problem is deterministic and is enclosed in \boxed{} in the dataset as ground truth (but not disclosed to the methods
solving the problem).
[The code is in this GitHub repository. In our experiment, we use the default configuration from](https://github.com/yiranwu0/MathChat)
[OpenAI, specifically temperature=1, and max token=inf (See OpenAI API Reference for](https://platform.openai.com/docs/api-reference/chat/create)
more details). We use the system message ”You are a helpful assistant” for vanilla prompt, PS, and
MathChat. For PoT, we do not add this system message, since our evaluation shows that PoT
-----
Figure 7: The Python prompt used on MathChat from Section 5.2.
Let's use Python to solve a math problem.
Query requirements:
You should always use the 'print' function for the output and use fractions/radical forms instead of decimals.
You can use packages like sympy to help you.
You must follow the formats below to write your code:
```python
# your code
```
Please follow this process:
1. Solve the problem step by step (do not over-divide the steps).
2. Take out any queries that can be asked through Python (for example, any calculations or equations that
can be calculated).
3. Wait for me to give the results.
4. Continue if you think the result is correct. If the result is invalid or unexpected, please correct your query or
reasoning.
After all the queries are run and you get the answer, put the answer in \\boxed{}.
without system message has a better performance. We discuss the effect of system message below
in Section C.
Here is the prompts for PoT Chen et al. (2022), PS Drori et al. (2022), and the additional two prompts
we designed:
- Program of Thoughts (PoT). See Figure 10. The whole prompt uses the Python code format,
where information such as the problem or instructions is in the comments.
- Program Synthesis (PS). The prompt for PS is "Write a program that answers
the following question: _{Problem}". Since the definition of ”program” is un-_
clear and sometimes the LLM agent won’t write code to solve the problem, we add the keyword ’Python’ in front of ’program’. After the message is returned, we used the proxy agent
to return the result (by default, GPT-4 would return the code in the code block). Then we
send another message to the model with the Python execution result and ask it to enclose the
final answer: {Return from Python}. Please put the final answer in
_\boxed{}. See Figure 9 for an example of the whole process._
- Python prompt (w/ MathChat). See Figure 7.
- Two-tools prompt (w/ MathChat). See Figure 8.
C SUPPLEMENTARY EXPERIMENTS AND RESULTS
We further evaluate a vanilla few-shot prompt, PoT with and without system message, and Vanilla
prompt with and without system message on the randomly selected 50 problems from each category
and present the results in Figure 5.
In the few-shot prompt, we randomly select 3 level-5 problem-solution pairs from the train set.
These examples are selected from each category and are used for all the problems from that category. The vanilla few-shot prompt starts with "Solve the problem carefully. Put
the final answer in \boxed{} just like the vanilla prompt, and then three "Problem:
... Solution: ..." pairs are attached. Compared with the vanilla prompt, adding three
additional examples does not make any obvious difference, and the overall performance is slightly
worse.
From our experiment, we also notice that the system message affects the performance of the LLM
agent. However, the impact significantly differs between methods. As shown in Table 5, using a
system message is crucial for the Vanilla prompt: adding the system message doubles the success
rate compared to the one without a system message. However, for PoT, adding the system message
-----
Let's use two tools (Python and Wolfram alpha) to solve a math problem.
Query requirements:
You must follow the formats below to write your code:
For Wolfram Alpha:
```wolfram
# your wolfram query
```
For Python:
```python
# your code
```
When using Python, you should always use the 'print' function for the output and use fractions/radical forms
instead of decimals.
You can use packages like sympy to help you.
Please follow this process:
1. Solve the problem step by step (do not over-divide the steps).
2. Take out any queries that can be asked through Python or Wolfram Alpha and select the most suitable tool to be
used (for example, any calculations or equations that can be calculated).
3. Wait for me to give the results.
4. Continue if you think the result is correct. If the result is invalid or unexpected, please correct your query or
reasoning.
Figure 8: The Two-tools prompt used on MathChat from Section 5.2. The added requirement
compared to the Python prompt is highlighted in yellow. This prompt allows the LLM agent to
choose from Python or Wolfram Alpha.
only slightly increases the performance. We add a further evaluation on all the level-5 problems
and find the PoT with the system message has an overall accuracy of 35.82%, which is lower than
the accuracy of PoT without the system message (37.67% as shown in the main results in Table 1).
We hypothesize the difference in the prompt format across different methods is the reason for this
behavior. The method with the Vanilla prompt imitates a conversation between the LLM agent and
humans via natural language, but PoT prompt is in Python code format, which explicitly directs the
model for code completion. Thus, the system message ”you are a helpful assistant” is more suitable
for Vanilla prompt but doesn’t align with PoT. More investigation is needed to understand the effect
of system messages.
|Problem Count|Algebra C.Prob I.Alg N.Theory Prealg Precalc Total 50 50 50 50 50 50 300|
|---|---|
|MathChat PS PoT w/o sys PoT w/ sys Vanilla w/o sys Vanilla w/ sys Few-shot (k=3)|30 24 8 34 28 10 134 17 19 12 31 26 5 110 20 19 9 24 24 7 103 18 23 9 23 29 7 109 14 4 0 4 13 1 35 26 13 1 17 21 1 79 21 6 2 18 24 1 72|
|---|---|
Table 5: Results for few-shot prompt, PoT w/ and w/o system message, Vanilla prompt w/ and w/o
system message.
D SUPPLEMENTARY FAILURE ANALYSIS
D.1 FAILURE UNDER DIFFERENT FORMS OF PROBLEM-SOLVING PROCESSES IN MA T HCH A T
The default prompt in MathChat allows the LLM agent to choose from different forms of problemsolving processes to solve the problem, and we investigate how choosing different forms could affect
the performance. We plot the correct rate when a problem from each category is solved with three
forms of problem-solving approaches depending on the existence and validity of the query in the
-----
|The product of the possible integer values of n is:|-15|Col3|
|---|---|---|
Write a python program that answers the following question: A point (2,n) is 5 units away from the point (−1,1). What is the product of all
possible integer values for ?n
Problem
You can solve this problem using the distance formula between two points:
_distance = sqrt((x2 −_ _x1)[2]_ + (y2 − _y1)[2])_
In this case, the distance is 5 units, and the two points are (2, n) and (-1, 1): GPT-4
5 = sqrt((2 −(−1))[2] + (n −1)[2])
Now, you can write a Python program to find all the possible integer values of n:
from math import sqrt
def find_possible_n_values():
possible_n_values = []
for n in range(-10, 10):
distance = sqrt((2 - (-1))**2 + (n - 1)**2)
if distance == 5:
possible_n_values.append(n)
return possible_n_values
def find_product_of_n_values(possible_n_values):
product = 1
for n in possible_n_values:
product *= n
return product
possible_n_values = find_possible_n_values()
product = find_product_of_n_values(possible_n_values)
print("The possible integer values of n are:", possible_n_values)
print("The product of these possible integer values is:", product)
This Python program first defines a function `find_possible_n_values` that iterates over a range of integer values for n, calculates the distance
using the given formula, and checks whether the distance is equal to 5. It then appends the valid integer values of n to a list of possible n values.
Another function, `find_product_of_n_values`, is then values of n … (Explanation of Code)
The possible integer values of n are: [-3, 5]
The product of these possible integer values is: -15 User Proxy Agent
Please put the final answer in \boxed{}.
The product of the possible integer values of n is: -15
Figure 9: An example of the process of PS. The query result will be returned to the LLM assistant
and ask it to put the answer in box. The process of PS is exactly the same as MathChat when the
agent in MathChat chooses to solve the problem with one Python program.
import math
import numpy as np
import sympy as sp
# Question: {problem}
# Answer this question by implementing a solver() function.
def solver():
# Let's write a Python program step by step, and then return the answer
# Firstly, we need define the following variable:
Figure 10: PoT prompt. Comparing to the original prompt from Chen et al. (2022), we add
"import sympy as sp" that gives the LLM agent hint to use the sympy library. The placeholder "{problem}" will be replaced with the actual problem.
generated solution in Figure 11: 1. The LLM agent doesn’t make any tool-using queries (Python)
when solving the problem. 2. The agent makes one or more queries, but at least one query is
invalid. 3. The agent makes all valid queries. It is shown in the plot that using Python correctly
could significantly increase the correct rate, while doesn’t use Python is worse than using Python
but having invalid queries. The results in Figure 11 show that especially for intermediate algebra
-----
70 No Query
Have Invalid Queries
60 All Valid Queries
50
40
30
Percentage Correct20
10
0
I.Alg Total
Algebra C.Prob N.Theory Prealg Precalc
Category
Figure 11: Success rate of MathChat under different forms of problem-solving processes: 1. the
LLM agent solves the problem without making any tool-using queries. 2. The agent makes queries
and has at least one invalid query 3. All queries made are valid.
Problem: Suppose x + y _x ∈[−5, −3] and y ∈[2,4]. What is the largest possible value of_ digit and the units digit. How many different such four-digit numbers could Problem: The four-digit number B 25AB is divisible by nine, with the tens ProblemA
_x −_ _y_ ? Problem 25AB represent?
To find the largest possible value of _x + y_ , we need to maximize the value of the Key Idea: A number is divisible by 9 if the sum of its digits is divisible by 9.
fraction. We can first consider how the numerator and denominator are affected by the x − _y_
given intervals of and . x _y_ We will now write a Python program to find all the possible values of and A _B_
such that 25AB is divisible by 9. For that, we will iterate through all possible
Since x ∈[−5, −3] and y ∈[2,4], the numerator x + y will be in the range [−3,1] values for each digit and check if the sum of digits is divisible by 9. GPT-4 LLM
and the denominator x − _y will be in the range [−9, −5]. Therefore, we can try to_ Agent
maximize the value of the fraction by choosing the largest possible value for the numerator while choosing the smallest possible value for the denominator. LLM from sympy import Symbol
Agent # Initialize possible values
Here's a Python program using the sympy package possible_values = []
to calculate the largest possible value of the fraction _x + y_ : User Proxy Agent
_x −_ _y_ # Iterate through all possible values of A (1 to 9) and B (0 to 9)
for A in range(1, 10):
from sympy import symbols, Rational for B in range(10):
# Calculate the sum of the digits
x, y = symbols('x y') sum_digits = 2 + 5 + A + B
numerator = x + y
denominator = x - y
fraction = numerator / denominator # Check if the sum of the digits is divisible by 9
if sum_digits % 9 == 0:
x_range = (-5, -3) possible_values.append((A, B))
y_range = (2, 4)
print(possible_values)
max_num = numerator.subs(x, x_range[1]).subs(y, y_range[1])
min_denom = denominator.subs(x, x_range[0]).subs(y, y_range[0])
Now, we will run this Python code to find possible values of and . A _B_
max_value = Rational(max_num, min_denom)
print(max_value)
Please run the code and provide the result. [(1, 1), (2, 0), (2, 9), (3, 8), (4, 7), (5, 6), (6, 5), (7, 4), (8, 3), (9, 2)] User Proxy Agent
-1/7 User Proxy Agent … Our final answer is 10. LLM
Agent
Type 1 Failure. The LLM assistant first calculates the range for numerator and Type 2 Failure. The LLM assistant has the correct idea to iterate
denominator separately and then chooses different and for numerator and denominator to x _y_ and find all numbers that sum of digits is divisible by 9. But for A, it didn’t
maximize the value, which is the wrong direction. include 0 as a possible digit, missing the number 2502 where (A,B) = (0,
A correct solution: Maximizing x + y is equivalent to maximizing _x + y_ 2x . 2). If change “range(1, 10)” to “range(10)”, the final answer is 11, which is
_x −_ _y_ _x −_ _y_ [+ 1 =] _x −_ _y_ [= −2]y −[x]x correct.
Note that −2xand y − _x are always positive, so to maximize this expression, we take y = 2, the_
smallest possible value ofy .
Then maximizing _xx + 2 −2_ is equivalent to maximizing _xx + 2 −2 [−1 =]_ _x −2 4_ [= −] 2 −4 _x_ [.]
Note that 2 − _x is always positive, so to maximize this expression, we take x = −5. Hence, the_
maximum value is −5 + 2−5 −2 [= 3]7
Figure 12: Additional example of Type 1 failure (Fail to devise a proper plan) and Type 2 failure
(Fail to execute the plan flawlessly).
and prealgebra, the gap in accuracy between ”using no query” and ”have invalid query” is large,
indicating that using Python is very helpful to solve problems from the two categories.
-----
Problem: The equation y = _[x][ +][ A]_ where A, B, and are integers, is shown below. What is C _A + B + C?_
_Bx + C_ [,]
Problem
[asy]
import graph; size(8.14cm); real lsf=0.5; pen dps=linewidth(0.7)+fontsize(10); defaultpen(dps); pen ds=black; real
xmin=-2.52,xmax=5.62,ymin=-4.28,ymax=3.32;
pen
cqcqcq=rgb(0.75,0.75,0.75);
…
[/asy]
Type 3 Failure. Since the code between [asy] and [/asy] is removed, it is not possible to solve the
problem. The information from the description is not enough to solve the problem.
Figure 13: An example of Type 3 failure where the ASY code is removed.
D.2 EXAMPLES OF 3 TYPES OF FAILURES
In Section 6.1, we summarize 3 main type of failures: type 1: failure to devise an appropriate plan.
type 2: failure to flawlessly execute the plan. type 3: other technical errors. We give one additional
example for type 1 error and type 2 error, and an example where the removal of ASY code leads to
a leak of information (Figure 12, Figure 13). We note that among all the problems, the ASY code
from 72 problems are removed, but 12 problems can still be solved correctly.
D.3 FAILURE UNDER DIFFERENT METHODS
We present the precalculus example where MathChat fails but all other methods success (Figure 15, Figure 16, Figure 17). The results from PS and PoT show that it is easy to get this problem
correct with Python (using the sympy.simplify function). However, in MathChat, the LLM
agent chooses to solve the problem via direct reasoning. Both MathChat and vanilla prompt solve
this problem by writing extremely long derivations. MathChat solves the problem with an even
longer step-by-step response and makes a calculation error during the process.
Additionally, we also provide an overview of the number of problems where all methods either fail
or succeed in Table 6.
|All Success All Fail|46 13 0 18 45 1 57 32 171 20 36 86|176 402|
|---|---|---|
Algebra C.Prob I.Alg N.Theory Prealg Precalc Total
Table 6: The number of problems where all methods fail, and all methods succeed.
D.4 THE RELATIONSHIP BETWEEN FAILURE RATE AND GENERATED SOLUTION LENGTH
Chain of Thought (CoT) prompting shows that extra reasoning steps for a problem can improve the
ability of LLMs Wei et al. (2022). With GPT-4, explicit reasoning is no longer an issue. Instead, we
find that a long and tedious reasoning process may result in more type 2 failures, such as calculation
errors, which results in a wrong answer even the overall direction is correct. We plot the distribution
of correct and wrong answer lengths and also the answer length of the given solution (The length of
the string list from splitting with a single space is used here). Since more complex and challenging
problems are likely to have a longer solving process but still a lower success rate, we separate problems from Intermediate Algebra and Precalculus with other categories (Figure 14), to distinguish
less challenging problems from harder problems. We note that the success rate of MathChat on
the four less challenging categories goes over 50%, but the rate is lower than 20% for Intermediate
Algebra and Precalculus.
GPT-4
Problem
-----
350
300
250
200
150
100
50
100
80
60
40
20
|Col1|MathChat Correct MathChat Wrong|
|---|---|
|Col1|MathChat Correct MathChat Wrong|
|---|---|
200 400 600 800 1000 1200 1400
Given Solution
MathChat Correct
MathChat Wrong
Length
200 400 600 800 1000 1200 1400
Given Solution
MathChat Correct
MathChat Wrong
Length
Figure 14: Distribution of solution length of both correctly and incorrectly solved problems in
MathChat. The distribution of length of the given solution (ground truth) is also shown. The
left figure represents the distribution of the less challenging categories and the right figure represents problems from Intermediate Algebra and Precalculus. We cut off outliers that the split string
length is longer than 1500.
Overall, the solution length of MathChat is longer than the ground truth solution. The length of the
given solution on the two fundamentally challenging categories is longer than other categories. For
MathChat, correct answers and wrong answers from the less challenging categories have a similar
distribution in solution length, where the majority of problems are solved with 50 to 500 string
length. For harder problems, however, an answer with more than 600 string lengths is likely to be
wrong. From the precalculus problem shown in Figure 17, the LLM agent can choose a plausible
strategy to solve the problem, but that strategy is less efficient and involve more math operations
compared to the given solution, this results in a much longer response, and it is more likely to make
errors during the process.
-----
Problem: Given constants and C _D,_ suppose that tan A and tan B are the solutions to x[2] + Cx + D = 0, where
tan(A + B) is defined.
Simplify sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B) .
Your expression should contain only one of the variables A, B, C and D. Problem
By Vieta's formulas, tan A + tan B = − _C and tan A tan B = D ._ Then from the angle addition formula,
_C_
tan(A + B) = [tan][ A][ + tan][ B] We write the expression we are interested in, in terms oftan(A + B) :
1 −tan A tan B [= −] 1 − _D_ [.]
sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B) SolutionCorrect
= cos[2](A + B)tan[2](A + B) + C cos[2](A + B)tan(A + B) + D cos[2](A + B)
= cos[2](A + B)(tan[2](A + B) + C tan(A + B) + D)
cos[2](A + B)
=
sin[2](A + B) + cos[2](A + B) [(tan][2][(][A][ +][ B][) +][ C][ tan(][A][ +][ B][) +][ D][)]
1
=
tan[2](A + B) + 1 [⋅(tan][2][(][A][ +][ B][) +][ C][ tan(][A][ +][ B][) +][ D][) .]
Then
1
tan[2](A + B) + 1 [⋅(tan][2][(][A][ +][ B][) +][ C][ tan(][A][ +][ B][) +][ D][)]
2
1 _C_ _C_
= (− 1 −C _D_ [)][2][ + 1] ⋅ ((− 1 − _D_ [)] − _C ⋅_ 1 − _D_ [+][ D])
(1 − _D)[2]_
=
(1 − _D)[2]_ + C[2][ ⋅] _[D][(][C][2](1 −[ + (1 −]D)[2][D][)][2][)]_
= D
To write a Python program that answers this question, we first need to convert the mathematical expressions into
Python code using math librariFirst, let's find the sum and product of $\tan A$ and $\tan B$ using the givens:
We have tan A + tan B = − _C and tan A ⋅tan B = D ._ Vanilla
Now, using the angle addition formula, we have:
sin(A + B) = sin A cos B + cos A sin B
and
cos(A + B) = cos A cos B −sin A sin B
But then, using the expressions for tan, we can rewrite these as:
sin(A + B) = [sin][ A]
cos A [cos][ B][ + cos][ A][ sin]cos[ B] B [= sin][ A][ cos][ B][ cos]cos[2][ A][2] _A[ + sin] cos[2][ B]B[ cos][ A][ cos][2][ B]_
and
cos(A + B) = [cos][ A]
sin A [sin][ B][ −cos]sin B[ B] [sin][ A][ = sin][2][ A][ sin][ B][ cos]sin[ B][2] _A[ −sin] sin[2]_ [2]B[ B][ sin][ A][ cos][ A]
Now we square both expressions and add:
sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B)
= [(((sin][ A][ cos][ B][ cos][2][ A][)][2][ + (2)(sin][ A][ cos][ A][ sin][ B][ cos][ B][)(sin][ A][ cos][ B][ cos][2][ A][)(sin][ B][ cos][ B][ cos][2][ A][)) + (sin][ A][ cos][ A][ sin][ B][ cos][ B][ cos][2][ A][)][2]
cos[4] _A cos[4]_ _B_
+C [((sin][2][ A][ sin][ B][ cos][ B][)][2][ −(2)(sin][ A][ cos][ A][ sin][ B][ cos][ B][)(sin][2][ A][ sin][ B][ cos][ B][)(sin][2][ B][ sin][ A][ cos][ A][) + (sin][2][ B][ sin][ A][ cos][ A][)][2]
sin[2] _A sin[2]_ _B cos[2]_ _A cos[2]_ _B_
+D [((sin][2][ A][ sin][ B][ cos][ B][)][2][ + (sin][ B][ cos][ B][ sin][ A][ cos][ A][ −sin][ A][ cos][ B][ sin][ B][ cos][ A][)(sin][ A][ cos][ A][ sin][ B][ cos][ B][ cos][2][ A][) + (sin][ A][ cos][ B][ sin][ B][ cos][ A][)][2][)]
sin[4] _A sin[4]_ _B_
Using the sums and products of $\tan A$ and $\tan B$, $\tan A\cdot \tan B = D$:
= [sin][2][ A][ cos][2][ B][ + 2 sin][ A][ cos][ A][ sin][ B][ cos][ B][(tan][ A][ + tan][ B][) + sin][2][ B][ cos][2][ A][ −] _[C][ sin][ A][ sin][ B][ cos][ A][ cos][ B][(tan][ A][ + tan][ B][)(sin][ A][ sin][ B][) +][ D][ sin][2][ A][ sin][2][ B]_
cos[2] _A cos[2]_ _B sin[2]_ _A sin[2]_ _B_
Noticing that tan A + tan B = − _C and canceling some terms, we find that the expression simplifies to: D_
Figure 15: The precalculus problem where other methods are correct but MathChat is wrong. This
figure shows the ground truth solution and the response with vanilla prompt.
-----
Problem: Given constants and C _D,_ suppose that tan A and tan B are the solutions to x[2] + Cx + D = 0, where
tan(A + B) is defined.
Simplify sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B) .
Your expression should contain only one of the variables A, B, C and D.
To write a Python program that answers this question, we first need to convert the mathematical expressions into
Python code using math libraries. Then we will simplify the given expression containing sinusoidal functions.
PS
Here is the program:
import math
import cmath
import sympy as sp
# Constants
C, D = sp.symbols('C D')
# Solving the quadratic equation
x = sp.symbols('x')
quadratic_eq = x**2 + C*x + D
solutions = sp.solve(quadratic_eq, x)
# Assuming tan(A) and tan(B) are two solutions
tan_A = solutions[0]
tan_B = solutions[1]
# Using properties of tan to find A and B
A = sp.atan(tan_A)
B = sp.atan(tan_B)
# Finding sin(A+B) and cos(A+B)
sin_A_plus_B = sp.sin(A + B)
cos_A_plus_B = sp.cos(A + B)
# Simplifying the given expression
expression = sin_A_plus_B**2 + C * sin_A_plus_B * cos_A_plus_B + D * cos_A_plus_B**2
simplified_expression = sp.simplify(expression)
print(f"The given expression simplified is: {simplified_expression}")
import math
import numpy as np POT
import sympy as sp # added
def solver():
# Define the symbols for the constants and angles
C, D, A, B = sp.symbols('C D A B')
# Now, we know that tan(A) and tan(B) are solutions to the quadratic equation x^2 + Cx + D = 0
# We can represent this equation using Sympy
x = sp.Symbol('x')
quadratic_eq = x**2 + C*x + D
# Now, let's use Sympy to find the solutions
tan_values = sp.solve(quadratic_eq, x)
# Now, we have tan(A) and tan(B) as elements of the tan_values list
tan_A = tan_values[0]
tan_B = tan_values[1]
# Next, find sin(A), cos(A), sin(B), and cos(B) from tan(A) and tan(B)
sin_A = tan_A / sp.sqrt(1 + tan_A**2)
cos_A = 1 / sp.sqrt(1 + tan_A**2)
sin_B = tan_B / sp.sqrt(1 + tan_B**2)
cos_B = 1 / sp.sqrt(1 + tan_B**2)
# Now we have all the necessary components to find sin(A + B) and cos(A + B)
sin_AB = sin_A * cos_B + cos_A * sin_B
cos_AB = cos_A * cos_B - sin_A * sin_B
# Now we can form the given expression in terms of sin(A + B) and cos(A + B)
given_expr = sin_AB**2 + C * sin_AB * cos_AB + D * cos_AB**2
# Simplify the given expression
simplified_expr = sp.simplify(given_expr)
return simplified_expr
print(solver())
Figure 16: The precalculus problem where other methods are correct but MathChat is wrong
(Continued). This figure shows the PS and PoT code. Both code returns the correct result: "D".
-----
Problem: Given constants and C _D,_ suppose that tan A and tan B are the solutions to
_x[2]_ + Cx + D = 0, where tan(A + B) is defined. Problem
Simplify sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B) .
Key Idea: We will use the relationship between tan A, tan B, and properties of sine and cosine.
LLM
Step 1: Since tan A and tan B are the solutions of the given equation, we have: Agent
tan A + tan B = − _C_
tan A ⋅tan B = D
Step 2: Convert the expression sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B) to an expression in terms
of tan A and tan B.
Using the formula for sin(A + B) = sin A cos B + cos A sin B and cos(A + B) = cos A cos B −sin A sin B, we
have:
sin[2](A + B) + C sin(A + B)cos(A + B) + D cos[2](A + B) =
((sin A cos B + cos A sin B)[2] + C(sin A cos B + cos A sin B)(cos A cos B −sin A sin B) + D(cos A cos B −sin A sin B)[2]
Continue. Please keep solving the problem until you need to query. (If you get to the answer, … User Proxy Agent
Step 3: We will now expand the expression and simplify it using the relations between tan A, tan B, and C,
D:
((sin A cos B + cos A sin B)[2] + C(sin A cos B + cos A sin B)(cos A cos B −sin A sin B) + D(cos A cos B −sin A sin B)[2])
Expanding the terms: LLM
=(sin[2] _A cos[2]_ _B + 2 sin A cos A sin B cos B + cos[2]_ _A sin[2]_ _B)_ Agent
+C(sin A cos A cos[2] _B −sin[2]_ _A sin B cos B + cos A cos B cos[2]_ _A −cos[2]_ _A sin A sin B)_
+D(cos[4] _A −2 sin A cos A sin B cos B + sin[2]_ _A sin[2]_ _B)_
Now we use the identity sin[2] _x + cos[2]_ _x = 1_
Step 4: Rewrite the expression in terms of tan A and tan B:
=(sin A cos B)(sin A cos B + 2 cos A sin B)
+C(sin A cos[2] _B cos A −sin[2]_ _A sin B cos B + cos A cos B cos[2]_ _A −sin B cos[2]_ _A cos A)_
+D(cos A sin B)(cos[2] _A −sin A cos A cos B −sin B)_
Now we substitute sin A with
tan A 1
and cos A with, and similarly for : B
1 + tan[2] _A_ 1 + tan[2] _A_
tan A tan B tan A + tan B tan[2] _A tan[2]_ _B_
= + C
( 1 + tan[2] _A_ 1 + tan[2] _B)(tan[2]_ _A + tan[2]_ _B + 2)_ ( ( 1 + tan[2] _A_ 1 + tan[2] _B)(tan[2]_ _A + tan[2]_ _B + 1)_ [)][ +][ D] (1 + tan[2] _A)(1 + tan[2]_ _B)_
We will use the relations tan A + tan B = − _C and tan A ⋅tan B = D._
Step 5: Make substitutions and simplify:
−D −C _D[2]_
= + C
( 1 + tan[2] _A_ 1 + tan[2] _B)(tan[2]_ _A + tan[2]_ _B + 2)_ ( ( 1 + tan[2] _A_ 1 + tan[2] _B)(tan[2]_ _A + tan[2]_ _B + 1)_ [)][ +] (1 + tan[2] _A)(1 + tan[2]_ _B)_
Notice that the three terms have common denominators. If we combine them, we get:
= [−][D][(tan][2][ A][ + tan][2][ B][ + 1) −] _[C][2][(tan][2][ A][ + tan][2][ B][ + 2) +][ D][2][(1 + tan][2][ A][)(1 + tan][2][ B][)]_
( 1 + tan[2] _A_ 1 + tan[2] _B)(tan[2]_ _A + tan[2]_ _B + 1)(1 + tan[2]_ _A)(1 + tan[2]_ _B)_
Step 6: Simplify the expression and obtain the final answer:
−C[2]
=
_D_
Figure 17: The precalculus example where all the other methods are correct but MathChat is
wrong (Continued). This figure shows the conversation generated in MathChat. The LLM agent
in MathChat chooses to solve the problem via direct reasoning, and it makes a calculation error
when expanding the terms in Step 3.
-----
| [
"Yiran, Wu",
"Yue, Wang",
"Feiran, Jia",
"Shaokun, Zhang",
"Hangyu, Li",
"Erkang, Zhu",
"Yin Tat, Lee",
"Richard, Peng",
"Qingyun, Wu",
"Chi, Wang"
] | 2023-01-01T00:00:00 | ICLR 2024 Workshop on LLM Agents | false | 0 | 0 | null | null | null | null |
Arithmetic Feature Interaction Is Necessary for Deep Tabular Learning | Until recently, the question of the effective inductive bias of deep models on tabular data has remained unanswered. This paper investigates the hypothesis that arithmetic feature interaction is necessary for deep tabular learning. To test this point, we create a synthetic tabular dataset with a mild feature interaction assumption and examine a modified transformer architecture enabling arithmetical feature interactions, referred to as AMFormer. Results show that AMFormer outperforms strong counterparts in fine-grained tabular data modeling, data efficiency in training, and generalization. This is attributed to its parallel additive and multiplicative attention operators and prompt-based optimization, which facilitate the separation of tabular samples in an extended space with arithmetically-engineered features. Our extensive experiments on real-world data also validate the consistent effectiveness, efficiency, and rationale of AMFormer, suggesting it has established a strong inductive bias for deep learning on tabular data. Code is available at https://github.com/aigc-apps/AMFormer. | Results show that AMFormer outperforms strong counterparts in fine-grained tabular data modeling, data efficiency in training, and generalization, suggesting it has established a strong inductive bias for deep learning on tabular data. | ## Arithmetic Feature Interaction Is Necessary for Deep Tabular Learning
**Yi Cheng[123*], Renjun Hu[4][∗], Haochao Ying[135†], Xing Shi[4], Jian Wu[135], Wei Lin[4]**
1State Key Laboratory of Transvascular Implantation Devices of the Second Affiliated Hospital, Zhejiang University School of
Medicine, China
2School of Software Technology, Zhejiang University, China
3Institute of Wenzhou, Zhejiang University, China
4Alibaba Group
5School of Public Health, Zhejiang University, China
_{chengy1, haochaoying, wujian2000}@zju.edu.cn, {renjun.hrj, shubao.sx, weilin.lw}@alibaba-inc.com_
**Abstract**
largely relies on the tree growth strategy, where each leaf
exhaustively enumerates the splitting features and values,
selecting the feature-value pair with the highest improvement on a certain criterion to divide the sample space. As
a result, complex non-linear relationships between variables
can be effectively captured. Meanwhile, since raw features
directly involve, tree models often assume the features have
been well-engineered (Micci-Barreca 2001).
In recent years, deep learning has become increasingly
popular as a means to reduce the need for time-consuming
and cumbersome feature engineering when dealing with tabular data. Early attempts at integrating deep neural networks
(DNNs) aim to model high-order feature interactions (Cheng
et al. 2016; Guo et al. 2017; Lian et al. 2018; Cheng et al.
2022; Chen et al. 2022); however, this paradigm requires a
careful balance between model expressiveness and overfitting. To overcome this, researchers have turned to extending generalized additive models with DNNs to boost expressiveness in a more constrained manner (Agarwal et al.
2021; Radenovic, Dubey, and Mahajan 2022; Enouen and
Liu 2022; Chen et al. 2023a), thereby preventing overfitting
and increasing interpretability. Others have explored treeinspired architectures that emulate key elements of tree models using neural networks, taking advantage of the strengths
of both techniques (Popov, Morozov, and Babenko 2020;
Katzir, Elidan, and El-Yaniv 2021). Transformer has also
been investigated for its success in the natural language and
vision fields (Song et al. 2019; Gorishniy et al. 2021; Chen
et al. 2023b; Yan et al. 2023).
Despite multiple attempts, the effectiveness of deep learning on tabular data remains questionable (Qin et al. 2021)
due to the unstable improvement over tree ensemble baselines, and tabular datasets have been considered as the last
“unconquered castle” for deep learning (Kadra et al. 2021).
The central question is whether deep models have an effective inductive bias on tabular data. In this paper, we argue
that arithmetic feature interaction is necessary for deep tabular learning. More specifically, the classic transformer is
found to be proficient at obtaining a compressed and sparse
representation for the input (Yu et al. 2023) to benefit the
downstream tasks. TANGOS (Jeffares et al. 2023), by regu
Until recently, the question of the effective inductive bias
of deep models on tabular data has remained unanswered.
This paper investigates the hypothesis that arithmetic feature interaction is necessary for deep tabular learning. To
test this point, we create a synthetic tabular dataset with a
mild feature interaction assumption and examine a modified
transformer architecture enabling arithmetical feature interactions, referred to as AMFormer. Results show that AMFormer outperforms strong counterparts in fine-grained tabular data modeling, data efficiency in training, and generalization. This is attributed to its parallel additive and multiplicative attention operators and prompt-based optimization,
which facilitate the separation of tabular samples in an extended space with arithmetically-engineered features. Our extensive experiments on real-world data also validate the consistent effectiveness, efficiency, and rationale of AMFormer,
suggesting it has established a strong inductive bias for deep
learning on tabular data. Code is available at https://github.
com/aigc-apps/AMFormer.
**1** **Introduction**
Tabular data is an extensively utilized and essential data format that finds its applications in diverse fields, including
finance, marketing, medical science, and recommendation
systems (Moro, Cortez, and Laureano 2011; Johnson et al.
2016; Goldberger et al. 2000; Harper and Konstan 2016;
Cheng et al. 2022). Such data often contains both categorical
and numerical features, each of which holds its own specific
meaning and relates to various modeling aspects. Due to the
heterogeneity and potential sparsity of features, analyzing
tabular data has remained a subject of research in the machine learning community. Among the many solutions proposed, tree ensemble models (Chen and Guestrin 2016; Ke
et al. 2017; Prokhorenkova et al. 2018) have emerged as the
predominant choice, owing to their performance on various
domains and robustness to data quality issues. Their success
*Equal contribution. Work done during Cheng’s internship at
Alibaba Group, under the guidance of Hu.
†Corresponding author.
Copyright © 2024, Association for the Advancement of Artificial
Intelligence (www.aaai.org). All rights reserved.
-----
larizing neurons to focus on sparsity, also confirms this benefit. We contend that, however, it is less capable of mining meaningful feature interactions through arithmetic operations. The importance of such interaction has been verified in various domains, e.g., the serum triiodothyronine to
thyroxine (T3/T4) ratio for thyroid disorder diagnosis (Mortoglou and Candiloros 2004) and the body mass index (BMI)
for obesity assessment. In summary, tabular deep learning
with the classic transformer is somehow similar to the sample space division based on raw features, and incorporating
arithmetic feature interaction explicitly allows for better separation of samples in an extended space with automatically
engineered features.
To validate our hypothesis, we create a synthetic dataset
based on a mild feature interaction assumption inspired
by (Enouen and Liu 2022). The dataset consists of eight features and responses are formulated as an additive mixture of
arithmetic feature combinations that remain sparse, limited
in interaction order, and deterministic. We compare the performance of XGBoost (Chen and Guestrin 2016), the classic transformer, and our modified AMFormer, a transformerlike architecture enabling arithmetic feature interaction, on
this data. Our results reveal that, in the presence of feature interaction, AMFormer significantly outperforms (up to
+57%) the other models for fine-grained tabular data modeling. Additionally, by explicitly learning arithmetic interaction, AMFormer also obtains substantial improvements in
terms of data efficiency in training (up to +16%) and generalization (up to +20%) compared to its counterparts. We
describe the details of dataset construction and our empirical results in Section 3. These findings have clearly demonstrated the effectiveness of our AMFormer as a general module for deep tabular learning.
The above strengths of AMFormer are rooted in two key
features that address the primary challenges posed by feature heterogeneity during model fitting. The first challenge is
the risk of underfitting caused by missing essential features,
while the second is the risk of overfitting caused by irrelevant correlations in redundant features. To compensate for
features that require arithmetic feature interaction, we equip
AMFormer with parallel attention operators responsible for
extracting meaningful additive and multiplicative interaction
candidates. Along the candidate dimension, these candidates
are then concatenated and fused using a down-sampling linear layer, allowing each layer of AMFormer to capture arithmetic feature interaction effectively. To prevent overfitting
caused by feature redundancy, we drop self-attention and
use two sets of prompt vectors as addition and multiplication queries. This approach gives AMFormer constrained
freedom for feature interaction and, as a side effect, optimizes both memory footprint and training efficiency. By integrating these two designs with the transformer, the resulting model could better analyze tabular data based on more
accurate sample separation.
We further evaluate AMFormer by comparison with six
baseline approaches on four real-world tabular datasets,.
Through our extensive experiments, we find that AMFormer
is generally effective for deep tabular learning: it could be
plugged into existing transformer-based methods, such as
AutoInt (Song et al. 2019) and FT-Transformer (Gorishniy et al. 2021), consistently providing improvement across
all datasets. Furthermore, the two AMFormer-enhanced approaches also consistently outperform XGBoost, which is
not the case for the original backbone models. Our ablation
study also confirmed the rationale of each building block of
AMFormer. Finally, we demonstrate that our prompt optimization can improve training efficiency by an order of magnitude, making our approach more scalable for real-world
cases. Collectively, we believe that AMFormer has identified a good inductive bias of deep tabular models. The main
contributions of our work are as follows:
- We empirically verify on synthetic data that arithmetic
feature interaction is necessary for deep tabular learning
from the perspectives of fine-grained data modeling, data
efficiency in training, and generalization.
- We implement the idea in AMFormer, which enhances
the transformer architecture with arithmetic feature interaction through the parallel additive and multiplicative
attention operators and prompt-based optimization.
- We also verify the effectiveness and efficiency of AMFormer through extensive tests on real-world data.
**2** **Related Work**
In this section, we review related machine-learning methods
for tabular data analysis and briefly introduce the ideas of
local attention that inspire our prompt-based optimization.
**Traditional methods. Tabular data could be naturally**
viewed as multi-dimensional vectors. Therefore, many classic machine-learning methods are applicable for mining tabular data, e.g., logistic regression, decision trees, and support vector machines (Bishop 2006). Since features in tabular data are of varying importance typically, generalized
additive models (GAM) (Hastie and Tibshirani 1986; Lou,
Caruana, and Gehrke 2012) remain popular for tabular data
analysis. These models are more accurate than simple linear
models with the introduction of shape functions and could
produce the importance of individual features as model interpretation. Pairwise interactions have also been incorporated in GAMs for better model fitness (Lou et al. 2013). Finally, gradient-boosted tree ensemble models, such as XGBoost (Chen and Guestrin 2016), LightGBM (Ke et al.
2017), and CatBoost (Prokhorenkova et al. 2018), are usually among the most effective in this category and have been
widely deployed in real-life systems.
**Deep learning models. As mentioned earlier, deep learn-**
ing has been explored for dealing with tabular data recently,
and the attempts could be classified into four classes. (C1)
In Wide&Deep (Cheng et al. 2016) and deep factorization
machines (Guo et al. 2017; Lian et al. 2018), multi-layer
perceptrons (MLPs) are stacked aside the traditional shallow components to capture high-order feature interaction.
(C2) NAM (Agarwal et al. 2021), NBM (Radenovic, Dubey,
and Mahajan 2022), and SIAN (Enouen and Liu 2022) combine GAMs with deep learning to enhance model expressive while still retraining interpretation. (C3) Alternatively,
NODE (Popov, Morozov, and Babenko 2020) and NetDNF (Katzir, Elidan, and El-Yaniv 2021) emulate key ele
-----
80
60
80
60
40
20
80
60
40
20
40
20
Transformer
AMFormer +28% [+26%]
+32%
+41%
+40%
+42%
10 20 30 40 50 100
|Col1|Col2|AM|Col4|Col5|Former|Col7|Col8|+32|Col10|Col11|+28 %|Col13|Col14|+26 %|Col16|Col17|%|Col19|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||+40|||+41 %|||%|||||||||||
||||||||||||||||||||
|+42||%|||||||||||||||||
||||||||||||||||||||
||||||||||||||||||||
||||||||||||||||||||
_f_ (%) of training data
_1_
(b) Data efficiency in training
TransformerAMFormer +46% [+31%]
+38%
+31%
+34%
+39%
|Col1|Col2|AM|Col4|Col5|Former|Col7|Col8|+38|Col10|Col11|+46 %|Col13|Col14|%|Col16|Col17|Col18|Col19|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||+34|||+31 %|||%|||||||||||
||||||||||||||||||||
|+39||%|||||||||||||||||
||||||||||||||||||||
||||||||||||||||||||
5 10 20 40 60 80
_f_ (%) of training data (minority)
_2_
(c) Generalization
|+1%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|XG Tra AM|boost nsformer Former|Col11|Col12|Col13|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
||||+14%||+26|%|||||||
||||||||+30%||||||
|||||||||||+57%|||
||||||||||||||
+1% XGboost
Transformer
AMFormer
+14%
+26%
+30%
+57%
4 64 128 256 512
number of classes (C)
(a) Fine-grained tabular data modeling
Figure 1: Results on synthetic data. The +x% in the figure are the relative improvement of AMFormer over Transformer.
ments of tree models using neural networks, e.g., differentiable oblivious decision trees, and disjunctive normal forms
with soft neural AND/OR gates. These tree-like models
could then enjoy the merits of deep learning. (C4) Finally,
relying on the global crossing capability, the transformer architecture has also been adapted to deep tabular learning.
Typical models include FT-Transformer (Gorishniy et al.
2021), AutoInt (Song et al. 2019), and DCAP (Chen et al.
2021), where the attention mechanism is utilized for additive feature interaction.
Despite the above efforts, it still remains open for effective inductive bias of deep tabular learning. In this paper, we
also consider transformer architecture as a promising candidate and further argue for the essential role of arithmetic
feature interaction for deep tabular learning. Our proposed
AMFormer enhances the classic transformer with such interaction through the parallel additive and multiplication attention operators and the interaction fusion layer. This is
among the first in the literature and AMFormer has demonstrated consistently better performance compared to other
transformer-based methods.
**Local attention. The square nature of transformer would**
make it less efficient for a large context, which is the case
for tabular data consisting of a large number of features.
In this situation, an efficient transformer is required. Since
words and pixels inherently possess continuity, it is natural to use local attention techniques to optimize efficiency.
For instance, PVT (Wang et al. 2021b) utilizes convolutional
layers after each attention block to gather local information
and gradually condenses the features. LongFormer (Beltagy,
Peters, and Cohan 2020) applies a moving window and calculates attention weights within the window. Sparse attention (Child et al. 2019) introduces sparse factorizations of
the full attention matrix. Different from the above, in this
paper, we borrow the notion of prompts to localize the receptive field of features, and the number of prompts is independent of the number of features.
**3** **Empirical Evaluation on Synthetic Data**
Recall that due to feature heterogeneity, the raw feature
space of tabular data might not necessarily contain all determining features. Accordingly, we argue that arithmetic fea
ture interaction is essential for deep tabular learning, aiming at supplementing these missing features with arithmetic
as the prior knowledge in a reserved stage. To validate our
hypothesis, we create a synthetic dataset inspired by the assumption of a generalized additive model with multi-order
while sparse feature interaction (Enouen and Liu 2022).
**Data construction. Our constructed dataset consists of**
eight features and the responses r are formulated as an additive mixture of arithmetic feature combinations:
_x[β]j_ _[ij]_ _._ (1)
_j=1_
Y
_αi_
_i=1_ _·_
X
_r =_
We fix the number K of additive terms to be much less than
the number of possible feature combinations (e.g., K = 5
in this work) and sample αi from a uniform distribution
_U_ (−1, 1). Each exponent βij is uniformly sampled from
_{1, 2, 3, 4} with a 50% chance and is set to 0 otherwise._
Thus, the expected number of involved features in each term
is 4. After selecting all α and β values, we randomly generate 200K instances of (x1, . . ., x8, r) by sampling xj from
a log uniform distribution between 0.5 and 2. The above setups ensure that our synthetic data remains deterministic and
sparse in terms of feature interaction, adhering to a mild assumption for feature interaction. Finally, all instances are divided into C equally-sized classes according to the response
values r, and we further split the data into 80%-20% for
training and testing, respectively.
**Results. To demonstrate the necessity of arithmetic fea-**
ture interaction, we compare our AMFormer (we leave its
technical details in the next section) with XGBoost and the
classic transformer on the constructed data. We first evaluate the ability of fine-grained tabular data modeling of these
approaches by varying the number C of classes from 4 to
512 and computing the test classification accuracy (Acc).
The results are reported in Fig. 1a. When varying C, the
Acc of all approaches decreases with the increment of C.
The performance of XGBoost soon drops to a low level with
_C = 64. This is because XGBoost utilizes raw feature val-_
ues xj only, which could not handle the interaction between
features and the response. Comparatively, both Transformer
and AMFormer maintain relatively high Acc with larger C,
-----
owing to the automatic feature engineering capacity of neural networks. We also find that AMFormer is consistently
better than Transformer, with larger improvement for higher
_C. This indicates the essential role of arithmetic feature in-_
teraction for fine-grained tabular data modeling.
In addition, through learning meaningful interaction patterns, we also expect AMFormer to have better training data
efficiency and generalization to minority classes. To verify
data efficiency, we fix C = 128, vary the fraction f1 of training data from 10% to 50%, and compute the test Acc of
Transformer and AMFormer, reported in Fig. 1b. We omit
XGBoost due to its uncompetitive performance. When varying f1, the Acc of both approaches increases with the increment of f1 as expected. Moreover, we observe consistently better relative improvement with less fraction of training data, e.g., +40% with f1 30% vs. the baseline +26%.
Indeed, AMFormer trained with 40% data is almost on par ≤
with Transformer on all data.
To verify generalization, we also fix C = 128, manually
turn half of the classes to minority classes by only reserving
a fraction f2 of training data, and compute the test Acc of
Transformer and AMFormer on minority classes, reported in
Fig. 1c. Note that training data on rest classes and test data
remain unfiltered. Similarly, we observe consistently better
performance, in both absolute Acc or relative improvement,
by AMFormer for minority classes.
From the above empirical evaluation, we believe that explicitly integrating arithmetic feature interaction is necessary for deep tabular learning.
**4** **Methodology**
We now present the technical details of our AMFormer. The
framework overview is given in Fig. 2, which closely resembles the classic Transformer architecture except for the
Arithmetic Block. With d denoting the dimensionality of
hidden states, AMFormer initiates the process by transforming raw features into representative embeddings, i.e., applying a 1-in-d-out linear layer for numerical features and a
_d-dimensional embedding lookup table for categorical fea-_
tures. Subsequently, these initial embeddings are processed
through L sequential layers, which serve to augment them
with vital context and interactive elements. Within each of
these layers, an arithmetic block that executes parallel additive and multiplicative attention is adopted to deliberately
foster arithmetic feature interactions. Residual connections
and feed-forward networks are reserved to facilitate the flow
of gradients and augment feature representation. Finally,
AMFormer employs either a classification or a regression
head to generate the final output based on these enriched
embeddings. The key components in the arithmetic block
include parallel attention and prompt tokens, which we will
further discuss in the following.
**4.1** **Parallel Attention**
The parallel attention is responsible for facilitating arithmetic feature interaction in AMFormer. The fundamental
concept involves harnessing two parallel attention streams,
each dedicated to computing either additive or multiplicative
interaction candidates based on input feature embeddings.
These computed candidates are subsequently concatenated
and undergo further integration through a fully-connected
(FC) layer. Consequently, the outputs of the FC layer can
represent diverse combinations of input features achieved
through arithmetic operations.
Formally, let N denote the number of input features in a
specific layer l ∈{1, . . ., L} and X ∈ R[N] _[×][d]_ the corresponding feature embedding matrix. As the layers in AMFormer are structurally identical, differing only in their parameters, we forego mentioning the layer index for ease of
notation. Recall that the classic attention (Vaswani et al.
2017) is inherently additive. We therefore derive log-scaled
embeddings in the multiplicative stream (Trask et al. 2018):
_Xlog = log(ReLU(X) + ϵ),_ (2)
where ϵ prevents log 0. Classic attention in the log space,
combined with an exponential operator, is then capable of
learning multiplicative interaction.
We next elaborate on the shared attention part of the two
streams, which aims to prepare useful additive and multiplicative feature interaction for the prediction task. Taking
the additive stream for instance, we first generate the query
_Q = XW_ _[Q], key K = XW_ _[K], and value V = XW_ _[V]_ embeddings with input embeddings X and trainable parameters
_W_ _[Q/K/V]_ _∈_ R[d][×][d]. The product QK _[T]_ gives the affinity between input features and could be understood as the likelihood of yielding meaningful interaction in our case. The output by the additive stream, calculated as the weighted sum
of value embeddings according to the normalized product:
_O[A]_ = softmax( _[QK]_ _[T]_
_√_
)V ∈ R[N] _[×][d],_ (3)
enriches each input feature with vital interactive information
from other features and, hence, could be regarded as additive
interaction candidates.
The above soft attention establishes a connection between
every pair of features, while the interaction on tabular data is
mostly sparse (Agarwal et al. 2021; Radenovic, Dubey, and
Mahajan 2022). Inspired, we further revise Eq. (3) into hard
attention. That is, we retrieve and retain the top-k highest entries in each row of QK _[T]_ while masking other entries with
a large negative constant, i.e., becoming 0 after softmax. As
such, each feature could only interact with k features and
it suffices to choose a small integer for hyperparameter k,
being independent with the number N of input features, to
ensure sparse interaction.
Similarly, we could obtain the multiplicative stream output O[M] _∈_ R[N] _[×][d]_ by using the log-scaled Xlog, another set
of trainable parameters, and an additional exponential operator. The central idea of our AMFormer is to provide complete arithmetic ability in every single layer. To achieve the
goal, we further concatenate the two outputs O[A] and O[M]
and apply an FC layer along the candidate dimension:
_O = FC(VConcat([O[A], O[M]_ ])[T] )[T] _._ (4)
Note that VConcat([O[A], O[M] ]) performs vertical concatenation, resulting in a shape of 2N × d, FC is a fullyconnected layer that also decreases the number of dimension
-----
Gather
|Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|G|Col12|Col13|Col14|Col15|Col16|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|||||||||||||||||
||||Index|||||||||||||
|||||||||||||||||
@ MatMul Similarity Map
C Concatenate Top-K tokens
ICG Interaction Candidate Generator
Output Probability
L× Concatenate + Down-Dim-FC Gather
Add
Top-k
Feed Exponential
Forward
MatMul
Weighted Sum Weighted Sum
Add
P K V
Arithmetic ICG ICG @
Block
ReLU+Log
C
P Prompt Token @ MatMul
Lookup Table FC Layer K Key C Concatenate
Categorical Inputs Numerical Inputs V Value ICG
Figure 2: The overview of AMFormer. L is the layer number.
from 2N to N, and the two transposition operators assure
the fusion of interaction candidates. To summarize, the parallel attention in the arithmetic block first extracts additive
and multiplicative interaction candidates independently, and
the candidates are subsequently mixed to facilitate complete
arithmetic feature interaction in a single layer of AMFormer.
**4.2** **Optimization with Prompt Tokens**
with a few hundred features. In cases of larger datasets, it is
suggested to start with Np at 256 or 512 and then reduce this
number by half for each ensuing layer.
**5** **Experiments**
In this section, we further evaluate our AMFormer in realworld scenarios. Using four datasets, we conduct extensive
experiments to evaluate: (i) the overall effectiveness of AMFormer compared with other competitive methods, (ii) the
rationale of each building component in AMFormer, (iii) the
impacts of the prompt-based optimization in effectiveness
and scalability, and (iv) the sensitivity to key parameters.
**5.1** **Experimental Setup**
**Datasets. We choose four real-world datasets for evaluation.**
- EP (Epsilon) is a simulated physics dataset from (PASCAL 2008), which uses 2,000 normalized numerical features for binary classification.
- HC (Home Credit Default Risk) uses both numerical and
categorical features to predict clients’ binary repayment
abilities, with a ratio of around 1:10 for positive to negative samples (Anna Montoya and Kotek 2018).
- CO (Covtype) is a multi-classification dataset that utilizes surrounding characteristics to predict types of trees
growing in an area (Rossi and Ahmed 2015).
The architecture described by far potentially faces a couple of practical challenges. Initially, the self-attention mechanism has a complexity that grows quadratically with the
number of features, specifically O(N [2]d), resulting in inefficiencies in time and memory for large feature sets (Yang
et al. 2023b,a). Additionally, tabular datasets often exhibit
interaction patterns that are data-invariant, e.g., those relating to domain patterns, while the current purely data-reliant
architecture has trouble capturing them. To tackle the two
issues, we have incorporated an optimization strategy that
utilizes prompt tokens within AMFormer.
The main idea is to substitute the data-dependent query
matrix Q with a set of trainable prompt token embeddings,
represented as a parameter matrix P ∈ R[N][p][×][d]. Each of
these prompt tokens is designed to facilitate the creation of
valuable additive or multiplicative interactions among features, considering up to k features at a time. The quantity
_Np of prompt tokens could be much less than the over-_
all number of features on large datasets. Consequently, the
time and memory complexities associated with AMFormer
become linear relative to the number N of input features,
thus enhancing the model’s capability to handle extensive
datasets. Additionally, this approach enables AMFormer to
inherently learn and establish patterns of feature interaction
that are consistent across different samples. The strategic
use of prompt tokens in conjunction with the top-k selection also allows AMFormer to disregard immaterial correlations in the data. This not only helps in preventing overfitting but also improves the model’s resilience against data
noise (Cheng et al. 2023; Qian et al. 2023).
In practice, we recommend fixing Np = N for datasets
- MI (MSLR-WEB10K) is a learn-to-rank dataset for
query-URL relevance ranking (Qin and Liu 2013). It contains 136 numerical features and relevance scores are
drawn from {0, 1, 2, 3, 4}.
We adopt the same learning tasks, i.e., binary/multiclassification and regression, and metrics, i.e., accuracy
(Acc), area under the ROC curve (AUC), and mean square
error (MSE), as previous studies. Table 1 summarizes the
statistics and evaluation settings of these datasets. We normalize the numerical features to have zero mean and unit
variance before feeding into models.
**Baselines. We compare AMFormer to a variety of base-**
line methods, including the competitive tree ensemble
-----
Dataset Task type Metric # Train # Valid # Test # Num. features # Cate. features
EP binary classification Acc 320,000 80,000 100,000 2,000 /
HC binary classification AUC 200,496 45,512 61,503 104 16
CO multi-classification Acc 371,847 92,962 116,203 54 /
MI regression MSE 72,3412 235,259 24,1521 136 /
Table 1: Dataset statistics and evaluation settings.
|Dataset XGBoost NODE DCN-V2 DCAP AutoInt FT-Trans.|AMF-A AMF-F|
|---|---|
|EP 87.32 (8) 89.60 (3) 88.22 (7) 89.24 (4) 88.48 (6) 89.05 (5) ↑ HC 74.59 (7) 74.93 (6) 72.34 (8) 75.63 (2) 75.01 (5) 75.07 (4) ↑ CO 96.72 (3) 92.31 (7) 90.78 (8) 96.21 (5) 92.40 (6) 96.60 (4) ↑ MI 0.5642 (3) 0.5644 (4) 0.6043 (8) 0.5753 (6) 0.5864 (7) 0.5717 (5) ↓|89.71 (2) 89.83 (1) 75.57 (3) 75.67 (1) 97.36 (1) 97.26 (2) 0.5606 (2) 0.5557 (1)|
|Rank std 5.3 2.6 5.0 1.8 7.8 0.5 4.3 1.7 6.0 0.8 4.5 0.6 ± ± ± ± ± ± ±|2.0 ± 0.8 1.3 ± 0.5|
Table 2: Performance comparison of AMFormer and known methods. The numbers inside parentheses are the ranks of performance. The best and second-best results are highlighted.
method XGBoost (Chen and Guestrin 2016), differentiable
tree model NODE (Popov, Morozov, and Babenko 2020),
transformer-based approaches AutoInt (Song et al. 2019)
and FT-Transformer (Gorishniy et al. 2021), and the recent deep cross nets DCN-V2 (Wang et al. 2021a) and
DCAP (Chen et al. 2021). Our AMFormer is a general deep
tabular learning module and we plug it into FT-Transformer
and AutoInt, leading to two variants AMF-A and AMF-F for
comparison, respectively.
**Implementation. All tested models are implemented**
with PyTorch v1.12 (Paszke et al. 2019). We use the recommended model parameters in the original papers for baseline methods and those of XGBoost follow (Gorishniy et al.
2021). Notably, AutoInt and FT-Transformer use a dimensionality of 32 and 192 for embeddings, respectively, which
are inherited in AMF-A and AMF-F. We adopt Adam with
betas=(0.9, 0.999) and eps=1e-8 for optimization. The learning rate first linearly increases to 1e-3 in the first 1k steps
and then decays by 90% every 20k steps by default, except for the HC data with an initial 1e-4 and 4k decaying
steps. The default batch size is 512, which reduces to 32
for transformer-based methods on the EP data due to GPUmemory limitation. We report the detailed hyper-parameters
of all methods in the supplement. All tests are conducted
on a machine with 104 Intel(R) Xeon(R) Platinum 8269CY
CPUs and an NVIDIA Tesla A100-SXM-40GB.
We next present our findings.
**5.2** **Comparison with Known Methods**
In the first set of tests, we evaluate the overall effectiveness
of our approach by comparing its variants with the considered baselines. The metrics on the test set are presented in
Table 2 and we conclude the following.
First, DCN-V2 performs the worst compared with other
tested approaches. Note that it utilizes MLP to capture highorder feature interaction, which turns out to be less effective
for tabular data. AutoInt only preserves the multi-head selfattention while dropping the feed-forward net and residual
connection from its transformer architecture, and its effectiveness remains not competitive either. Inspired, we keep
these operators in our AMFormer.
For the rest baselines, we observe inconsistent performance across different datasets. The top-performing methods are diverse, e.g., XGBoost on CO and MI, NODE on
EP, and DCAP on HC. This result has somehow demonstrated the challenge of identifying a unified modeling bias
for tabular data analysis. Overall, we find that DCAP and
FT-Transformer are the best among these baselines according to the average performance rank on all datasets. Both
approaches are based on the transformer architecture, indicating its potential for deep tabular learning. However, it is
worth noting that none of the deep models could outperform
XGBoost on all datasets.
Finally, the two variants of AMFormer are consistently
better than the six tested baselines on the four datasets, except for AMF-A on the HC data where it slightly underperforms compared to DCAP. This implies that AMFormer
consistently enhances the performance of the two backbone
models on all datasets. Specifically, in classification tasks,
AMFormer improves the accuracy or AUC of AutoInt and
FT-transformer by at least 0.5%, with improvements of up
to 1.23% and 4.96% observed on the EP and CO data for
AutoInt. In regression tasks, the MSE of the two backbone models decreases by more than 0.016. The effectiveness and stability of AMFormer result in significantly better
performance rankings for AMF-A and AMF-F compared to
all existing approaches, marking a significant milestone for
deep learning on tabular data. From our perspective, we believe that AMFormer effectively addresses the question of
whether deep learning is necessary for tabular data.
**5.3** **Ablation Study**
We next conduct an ablation study to evaluate the impacts of
the building component within our AMFormer, i.e., the additive attention, the multiplicative attention, and the promptbased optimization. Similarly, we consider two backbone
models and evaluate the test metrics by using different combinations of the components on the EP and MI datasets. Note
that EP has the largest number of features and MI is the only
regression dataset. The results are reported in Table 3 and
-----
Backbone Add. Multi. Prompt EP ↑ MI ↓
✓ - - 88.48 0.5864
✓ - ✓ 89.61 0.5638
- ✓ - 89.53 0.5748
AutoInt
- ✓ ✓ 89.65 0.5631
✓ ✓ - 89.55 0.5639
✓ ✓ ✓ **89.71** **0.5606**
✓ - - 89.05 0.5717
✓ - ✓ 89.80 0.5633
- ✓ - 50.05 0.5624
FT-Trans.
- ✓ ✓ 50.05 0.5605
✓ ✓ - 89.58 0.5585
✓ ✓ ✓ **89.83** **0.5557**
Table 3: Results of ablation study on EP and MI.
97.3
97.1
96.9
97
96
95
89.8
89.7
89.9
89.8
89.7
|Col1|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|||||||
|||||||
||||||EP|
||||||CO|
|Col1|Col2|Col3|Col4|
|---|---|---|---|
|||||
|||||
||||EP|
||||CO|
2 4 6 8 10
Top-K for multiplication
(b) Parameter k
Layer Number (L)
(a) Layer number L
Figure 3: Impacts of layer number L and parameter k.
_NP_ 32 64 128 256 512 1024
EP 89.75 89.79 89.77 89.81 **89.83** 89.76
Table 5: Impact of the number NP of prompt tokens.
achieves approximately 7 times faster training speed, i.e.,
only around 3 hours of training time. Moreover, with the
same batchsize, our AMFormer needs only 17 GB GPUmemory while its counterparts take up at least 33 GB GPUmemory. Less memory usage makes it possible to train and
employ our AMFormer on less-powerful devices.
Method Prompt Train (hr) GFLOPS GPU-M
AutoInt - 16.81 7.2 33 GB
FT-Trans - 19.64 9.7 34 GB
AMF-A - 25.77 12.3 36 GB
AMF-F - 20.88 11.3 38 GB
AMF-A ✓ 2.93 1.3 17 GB
AMF-F ✓ 3.11 0.7 18 GB
AMF-A % ✓ 11.37% 10.45% 51.51%
AMF-F % ✓ 14.89% 6.16% 52.94%
Table 4: Scalability evaluation of prompt-based optimization
**5.4** **Parameter Sensitivity**
Finally, we evaluate the parameter sensitivity of AMFormer.
To examine the impact of L, we vary L from 1 to 6, fix
the other parameters to their default values, and test the Acc
results on the EP and CO dataset, as shown in Fig. 3a. When
increasing L, the Acc first increases and then decreases on
EP while it continues increasing on CO. The best Acc is
attained at L = 3 for EP and L = 6 for CO. Note that
the accuracy metric on the CO dataset starts to plateau at
_L = 4. Overall, we believe that approximately 3 ∼_ 4 layers
are sufficient for feature extraction.
When dealing with large datasets, we set the number of
prompt tokens as NP in the first layer, and then reduce this
number by half for each subsequent layer. As shown in Table 5, on the large EP data, a small NP (< 256) results in
slightly worse performance caused by insufficient learning,
while the performance tends to saturate around 256. However, increasing NP further leads to a decrease in performance, because large NP tends to cause redundancy and
overfitting. Although 512 tokens exhibit better performance
than 256, the trade-off between performance and efficiency
makes 256 a balanced choice.
We further present the sensitivity results of parameter k,
_i.e., top-k, for forming feature interaction. The results on EP_
and CO are shown in Fig. 3b, from which we find that the
Acc increases first from 2 to 8 and then decreases slightly.
Overall, large k is more suitable for AMFormer. This is because the impacts of less relevant features are decreased by
the affinity weights. However, there is a risk of overfitting
in practical training and larger k can then have a negative
impact. We thus recommend k = 8 by default.
we find the following.
First, we find that multiplicative interaction is important,
and using multiplicative attention alone could obtain better
results than using classic additive attention alone in three
of the four testing cases (row 1 vs. row 3), except for using FT-Transformer as the base model on EP. Moreover,
using parallel additive and multiplicative attention consistently improves the effectiveness in all scenarios (row 5 vs.
1&3), indicating the usefulness of arithmetic feature interaction for tabular data analysis. In addition, our prompt-based
optimization also leads to consistent performance improvement (rows 1&3&5 vs. 2&4&6, respectively). This is because prompts could stabilize the interaction patterns which
do not vary with examples and this is very important for tabular data. Moreover, when dealing with a large number of
features, a small number of prompts could reduce irrelevant
interactions in redundant features.
The complexity of attention has always been a very
efficiency-critical part. With an O(N [2]) complexity to the
number N of tokens/features, the computational cost will increase quadratically with the increase of token number, and
the attention map will also occupy a lot of GPU memory during training. From Table 4 we find that the overall training
time for FT-Transformer and AutoInt exceeds 15 hours. For
our AMFormer without prompts, the training time even exceeds 20 hours. Our AMFormer incorporates the additional
multiplicative attention, which requires more FLOPs and results in longer training time. After applying the prompt optimization to limit feature interaction, our AMFormer reduces
computational resource by approximately 90% and 94% and
**6** **Conclusion**
This paper studied the effective inductive bias of deep models on tabular data. We hypothesized that arithmetic feature
-----
interaction is necessary for tabular deep learning and integrated this idea in the transformer architecture to derive
AMFormer. We verified the effectiveness of AMFormer on
both synthetic and real-world data. The results of our synthetic data demonstrated its better capacity for fine-grained
tabular data modeling, data efficiency in training, and generalization. Moreover, extensive experiments on real-world
data further confirmed its consistent effectiveness, the rationale behind each building block, and the scalability to handle
large-scale data. We thus believe that AMFormer has established a strong inductive bias for deep tabular learning.
**Acknowledgments**
This research was partially supported by National Natural
Science Foundation of China under grants No. 62106218
and No. 62132017, Zhejiang Key R&D Program of China
under grant No. 2023C03053.
**References**
Agarwal, R.; Melnick, L.; Frosst, N.; Zhang, X.; Lengerich,
B. J.; Caruana, R.; and Hinton, G. E. 2021. Neural Additive Models: Interpretable Machine Learning with Neural
Nets. In Advances in Neural Information Processing Sys_tems, 4699–4711._
Anna Montoya, K., inversion; and Kotek, M. 2018. Home
Credit Default Risk. url: https://kaggle.com/competitions/
home-credit-default-risk.
Beltagy, I.; Peters, M. E.; and Cohan, A. 2020. Longformer:
The Long-Document Transformer. CoRR, abs/2004.05150.
Bishop, C. M. 2006. _Pattern Recognition and Machine_
_Learning (Information Science and Statistics). Berlin, Hei-_
delberg: Springer-Verlag. ISBN 0387310738.
Chen, J.; Liao, K.; Fang, Y.; Chen, D. Z.; and Wu, J. 2023a.
TabCaps: A Capsule Neural Network for Tabular Data Classification with BoW Routing. In The Eleventh International
_Conference on Learning Representations, ICLR 2023, Ki-_
_gali, Rwanda, May 1-5, 2023. OpenReview.net._
Chen, J.; Liao, K.; Wan, Y.; Chen, D. Z.; and Wu, J. 2022.
DANets: Deep Abstract Networks for Tabular Data Classification and Regression. In Thirty-Sixth AAAI Conference on
_Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference_
_on Innovative Applications of Artificial Intelligence, IAAI_
_2022, The Twelveth Symposium on Educational Advances in_
_Artificial Intelligence, EAAI 2022 Virtual Event, February_
_22 - March 1, 2022, 3930–3938. AAAI Press._
Chen, J.; Yan, J.; Chen, D. Z.; and Wu, J. 2023b. ExcelFormer: A Neural Network Surpassing GBDTs on Tabular Data. CoRR, abs/2301.02819.
Chen, T.; and Guestrin, C. 2016. XGBoost: A Scalable
Tree Boosting System. In Proceedings of the 22nd ACM
_SIGKDD International Conference on Knowledge Discov-_
_ery and Data Mining, 785–794. ACM._
Chen, Z.; Zhong, F.; Chen, Z.; Zhang, X.; Pless, R.; and
Cheng, X. 2021. DCAP: Deep Cross Attentional Product
Network for User Response Prediction. In The 30th ACM
_International Conference on Information and Knowledge_
_Management, 221–230. ACM._
Cheng, H.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.;
Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.;
Anil, R.; Haque, Z.; Hong, L.; Jain, V.; Liu, X.; and Shah, H.
2016. Wide & Deep Learning for Recommender Systems.
In Proceedings of the 1st Workshop on Deep Learning for
_Recommender Systems, 7–10. ACM._
Cheng, M.; Gao, Y.; Liu, G.; Jin, H.; and Zhang, X.
2022. EasyRec: An easy-to-use, extendable and efficient
framework for building industrial recommendation systems.
_ArXiv, abs/2209.12766._
Cheng, Y.; Ying, H.; Hu, R.; Wang, J.; Zheng, W.; Zhang,
X.; Chen, D.; and Wu, J. 2023. Robust Image Ordinal Regression with Controllable Image Generation. In Proceed_ings of the Thirty-Second International Joint Conference on_
_Artificial Intelligence, IJCAI-23, 627–635._
Child, R.; Gray, S.; Radford, A.; and Sutskever, I. 2019.
Generating Long Sequences with Sparse Transformers.
_CoRR, abs/1904.10509._
Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn,
D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.;
Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2020.
An Image is Worth 16x16 Words: Transformers for Image
Recognition at Scale. CoRR, abs/2010.11929.
Enouen, J.; and Liu, Y. 2022. Sparse Interaction Additive
Networks via Feature Interaction Detection and Sparse Selection. In NeurIPS.
Goldberger, A. L.; Amaral, L. A. N.; Glass, L.; Hausdorff, J. M.; Ivanov, P. C.; Mark, R. G.; Mietus, J. E.;
Moody, G. B.; Peng, C.-K.; and Stanley, H. E. 2000. PhysioBank, PhysioToolkit, and PhysioNet: Components of a
New Research Resource for Complex Physiologic Signals.
_Circulation, 101(23): e215–e220._ Circulation Electronic
Pages: http://circ.ahajournals.org/content/101/23/e215.full
PMID:1085218; doi: 10.1161/01.CIR.101.23.e215.
Gorishniy, Y.; Rubachev, I.; Khrulkov, V.; and Babenko,
A. 2021. Revisiting Deep Learning Models for Tabular
Data. In Advances in Neural Information Processing Sys_tems, 18932–18943._
Guo, H.; Tang, R.; Ye, Y.; Li, Z.; and He, X. 2017. DeepFM:
A Factorization-Machine based Neural Network for CTR
Prediction. In Proceedings of the Twenty-Sixth International
_Joint Conference on Artificial Intelligence, 1725–1731. ij-_
cai.org.
Harper, F. M.; and Konstan, J. A. 2016. The MovieLens
Datasets: History and Context. ACM Trans. Interact. Intell.
_Syst., 5(4): 19:1–19:19._
Hastie, T. J.; and Tibshirani, R. J. 1986. Generalized Additive Models. Statistical Science, 1(3): 297–310.
Jeffares, A.; Liu, T.; Crabb´e, J.; Imrie, F.; and van der Schaar,
M. 2023. TANGOS: Regularizing Tabular Neural Networks
through Gradient Orthogonalization and Specialization. In
_The Eleventh International Conference on Learning Rep-_
_resentations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023._
OpenReview.net.
Johnson, A. E. W.; Pollard, T. J.; Shen, L.; wei H. Lehman,
L.; Feng, M.; Ghassemi, M. M.; Moody, B.; Szolovits, P.;
-----
Celi, L. A.; and Mark, R. G. 2016. MIMIC-III, a freely
accessible critical care database. Scientific Data, 3.
Kadra, A.; Lindauer, M.; Hutter, F.; and Grabocka, J. 2021.
Well-tuned Simple Nets Excel on Tabular Datasets. In Ad_vances in Neural Information Processing Systems, 23928–_
23941.
Katzir, L.; Elidan, G.; and El-Yaniv, R. 2021. Net-DNF: Effective Deep Modeling of Tabular Data. In 9th International
_Conference on Learning Representations, ICLR. OpenRe-_
view.net.
Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.;
Ye, Q.; and Liu, T. 2017. LightGBM: A Highly Efficient
Gradient Boosting Decision Tree. In Advances in Neural
_Information Processing Systems, 3146–3154._
Lian, J.; Zhou, X.; Zhang, F.; Chen, Z.; Xie, X.; and Sun, G.
2018. xDeepFM: Combining Explicit and Implicit Feature
Interactions for Recommender Systems. In Proceedings of
_the 24th ACM SIGKDD International Conference on Knowl-_
_edge Discovery & Data Mining, 1754–1763. ACM._
Lou, Y.; Caruana, R.; and Gehrke, J. 2012. Intelligible Models for Classification and Regression. In Proceedings of
_the 18th ACM SIGKDD International Conference on Knowl-_
_edge Discovery and Data Mining, KDD ’12, 150–158._
Lou, Y.; Caruana, R.; Gehrke, J.; and Hooker, G. 2013. Accurate Intelligible Models with Pairwise Interactions. In
_Proceedings of the 19th ACM SIGKDD International Con-_
_ference on Knowledge Discovery and Data Mining, KDD_
’13, 623–631.
Micci-Barreca, D. 2001. A Preprocessing Scheme for HighCardinality Categorical Attributes in Classification and Prediction Problems. SIGKDD Explor., 3(1): 27–32.
Moro, S.; Cortez, P.; and Laureano, R. 2011. Using Data
Mining for Bank Direct Marketing: An Application of the
CRISP-DM Methodology. In Proceedings of the European
_Simulation and Modelling Conference._
Mortoglou, A.; and Candiloros, H. 2004. The serum triiodothyronine to thyroxine (T3/T4) ratio in various thyroid disorders and after Levothyroxine replacement therapy.
_Hormones (Athens, Greece), 3(2): 120–126._
PASCAL. 2008. Epsilon Data. url: https://www.csie.ntu.
edu.tw/[∼]cjlin/libsvmtools/datasets/binary.html#epsilon.
Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.;
Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.;
Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.;
Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.;
and Chintala, S. 2019. PyTorch: An Imperative Style, HighPerformance Deep Learning Library. In Advances in Neural
_Information Processing Systems, 8024–8035. Curran Asso-_
ciates, Inc.
Popov, S.; Morozov, S.; and Babenko, A. 2020. Neural
Oblivious Decision Ensembles for Deep Learning on Tabular Data. In 8th International Conference on Learning Rep_resentations, ICLR. OpenReview.net._
Prokhorenkova, L. O.; Gusev, G.; Vorobev, A.; Dorogush,
A. V.; and Gulin, A. 2018. CatBoost: unbiased boosting
with categorical features. In Advances in Neural Informa_tion Processing Systems, 6639–6649._
Qian, S.; Ying, H.; Hu, R.; Zhou, J.; Chen, J.; Chen, D. Z.;
and Wu, J. 2023. Robust Training of Graph Neural Networks
via Noise Governance. In Proceedings of the Sixteenth ACM
_International Conference on Web Search and Data Mining,_
607–615.
Qin, T.; and Liu, T. 2013. Introducing LETOR 4.0 Datasets.
_CoRR, abs/1306.2597._
Qin, Z.; Yan, L.; Zhuang, H.; Tay, Y.; Pasumarthi, R. K.;
Wang, X.; Bendersky, M.; and Najork, M. 2021. Are Neural
Rankers still Outperformed by Gradient Boosted Decision
Trees? In 9th International Conference on Learning Repre_sentations, ICLR. OpenReview.net._
Radenovic, F.; Dubey, A.; and Mahajan, D. 2022. Neural
Basis Models for Interpretability. In NeurIPS.
Rendle, S. 2010. Factorization Machines. In The 10th IEEE
_International Conference on Data Mining, 995–1000. IEEE_
Computer Society.
Rossi, R. A.; and Ahmed, N. K. 2015. The Network Data
Repository with Interactive Graph Analytics and Visualization. In Proceedings of the Twenty-Ninth AAAI Conference
_on Artificial Intelligence, 4292–4293. AAAI Press._
Shen, Z.; Zhang, M.; Zhao, H.; Yi, S.; and Li, H. 2021. Efficient Attention: Attention with Linear Complexities. In
_IEEE Winter Conference on Applications of Computer Vi-_
_sion, 3530–3538. IEEE._
Song, W.; Shi, C.; Xiao, Z.; Duan, Z.; Xu, Y.; Zhang, M.;
and Tang, J. 2019. AutoInt: Automatic Feature Interaction
Learning via Self-Attentive Neural Networks. In Proceed_ings of the 28th ACM International Conference on Informa-_
_tion and Knowledge Management, 1161–1170. ACM._
Trask, A.; Hill, F.; Reed, S. E.; Rae, J.; Dyer, C.; and Blunsom, P. 2018. Neural Arithmetic Logic Units. In Advances
_in Neural Information Processing Systems, volume 31._
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones,
L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In Advances in Neural Information
_Processing Systems, 5998–6008._
Wang, R.; Shivanna, R.; Cheng, D. Z.; Jain, S.; Lin, D.;
Hong, L.; and Chi, E. H. 2021a. DCN V2: Improved Deep &
Cross Network and Practical Lessons for Web-scale Learning to Rank Systems. In The Web Conference, 1785–1797.
ACM / IW3C2.
Wang, S.; Li, B. Z.; Khabsa, M.; Fang, H.; and Ma, H. 2020.
Linformer: Self-Attention with Linear Complexity. CoRR,
abs/2006.04768.
Wang, W.; Xie, E.; Li, X.; Fan, D.; Song, K.; Liang, D.;
Lu, T.; Luo, P.; and Shao, L. 2021b. Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without
Convolutions. In 2021 IEEE/CVF International Conference
_on Computer Vision, 548–558. IEEE._
Yan, J.; Chen, J.; Wu, Y.; Chen, D. Z.; and Wu, J. 2023.
T2G-FORMER: Organizing Tabular Features into Relation
Graphs Promotes Heterogeneous Feature Interaction. In
Williams, B.; Chen, Y.; and Neville, J., eds., Thirty-Seventh
_AAAI Conference on Artificial Intelligence, AAAI 2023,_
-----
_Thirty-Fifth Conference on Innovative Applications of Ar-_
_tificial Intelligence, IAAI 2023, Thirteenth Symposium on_
_Educational Advances in Artificial Intelligence, EAAI 2023,_
_Washington, DC, USA, February 7-14, 2023, 10720–10728._
AAAI Press.
Yang, Y.; Sun, Z.; Zhu, H.; Fu, Y.; Zhou, Y.; Xiong, H.; and
Yang, J. 2023a. Learning Adaptive Embedding Considering
Incremental Class. IEEE Trans. Knowl. Data Eng., 35(3):
2736–2749.
Yang, Y.; Zhou, D.; Zhan, D.; Xiong, H.; Jiang, Y.; and Yang,
J. 2023b. Cost-Effective Incremental Deep Model: Matching Model Capacity With the Least Sampling. IEEE Trans.
_Knowl. Data Eng., 35(4): 3575–3588._
Yu, Y.; Buchanan, S.; Pai, D.; Chu, T.; Wu, Z.; Tong, S.;
Haeffele, B. D.; and Ma, Y. 2023. White-Box Transformers
via Sparse Rate Reduction. CoRR, abs/2306.01129.
Yuan, G.-X.; Ho, C.-H.; and Lin, C.-J. 2011. An Improved
GLMNET for L1-Regularized Logistic Regression. In Pro_ceedings of the 17th ACM SIGKDD International Confer-_
_ence on Knowledge Discovery and Data Mining, 33–41._
-----
dimension 1024
# Layer 2
choice-function entmax15
bin-function entmoid15
Table 6: Default Settings for NODE.
# Layers 3
dimension 32 for AutoInt, 192 otherwise
heads 8
FF-dropout 0.1
Attention-dropout 0.2
Table 7: Default Settings for Transformer-based methods,
_i.e., FT-Transformer, DCAP and AutoInt._
layer-num 2
embedding-size 16
dnn-hidden-units (562, 562, 562)
init-std 0.0001
l2-reg 0.00001
drop-rate 0.5
Table 8: Default Settings for DCN-V2.
booster ”gbtree”
early-stopping-rounds 50
n-estimators 2000
Table 9: Default Settings for XGBoost.
which uses the default hyper-parameter settings presented
in Table 6. As shown Table 7, Transformer-based methods
share most of the hyper-parameters, except for the feature
dimension. AutoInt uses a feature dimension of 32, while
FT-Transformer and DCAP use a feature dimension of 192.
The hyper-parameters for DCN-V2 are listed in Table 8.It
should be noted that the embedding feature within DCN-V2
is calculated as the sum of the number of numerical features
and the product of the number of categorical features and
the embedding size. Additionally, the default hyperparameter settings for XGBoost can be found in Table 9.
76
75.5
75
74.5
0.565
0.56
0.555
|MI HC|Col2|Col3|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
Figure 4: Impact of layer number L on MI and HC.
|Col1|MI HC|
|---|---|
0.559
75.6
75.4
|Col1|MI HC|Col3|Col4|Col5|Col6|Col7|Col8|
|---|---|---|---|---|---|---|---|
|||||||||
|||||||||
|||||||||
MI
HC
10
0.557
0.555
Figure 5: Parameter K on MI and HC.
**Appendix A: Extra Results on Parameter**
**Sensitivity**
We also investigate the effect of the parameter L on the
MI and HC datasets, while keeping the default settings for
other parameters. The MSE (↓) and ACC (↑) results are illustrated in Fig. 4. Similar to the performance on the EP and
CO datasets, as the number of layers increases, the overall
performance of our AMFormer initially improves and then
starts to decline. The optimal performance is achieved on
both datasets when L = 3.
Considering the results obtained from all four datasets, it
can be concluded that L = 3 is a parameter that universally
applies across different datasets.
With the optimal number of layers L, we further explore
the impact of different values of K on these two datasets. As
depicted in Fig. 5, the ACC on the HC dataset demonstrates
a rapid increase before K = 6 followed by a slight decline,
while the MSE on the MI dataset continues to decrease and
stabilizes after K = 8. These findings suggest that a larger
value of K can enhance the performance of our AMFormer,
but increasing K beyond a certain threshold may lead to a
slight decline in performance.
**Appendix B: Default Settings for Each Method**
In this section, we provide the implementation details for all
models. We utilize the official implementation for NODE [1],
1https://github.com/Qwicen/node
-----
| [
"Yi, Cheng",
"Renjun, Hu",
"Haochao, Ying",
"Xing, Shi",
"Jian, Wu",
"Wei, Lin"
] | 2024-03-19T00:00:00 | AAAI 2024 Machine Learning | false | 0 | 0 | null | http://arxiv.org/abs/2402.02334 | https://arxiv.org/abs/2402.02334 | https://www.semanticscholar.org/paper/4c58009a011a98f505e641a1a8873fba008f03be |
Arithmetical Mini-Games | N/A | null | null | [
"Gauthier, Thibault"
] | 2019-01-01T00:00:00 | null | false | 0 | 0 | null | http://aitp-conference.org/2019/abstract/paper%2021.pdf | null | null |
Artificial intelligence and machine learning generated conjectures with TxGraffiti | \emph{TxGraffiti} is a machine learning and heuristic based artificial intelligence designed to automate the task of conjecturing in mathematics. Since its inception, TxGraffiti has generated many surprising conjectures leading to publication in respectable mathematical journals. In this paper we outline the machine learning and heuristic techniques implemented by TxGraffiti. We also recall its contributions to the mathematical literature and announce a new online version of the program available for anyone curious to explore conjectures in graph theory. | null | ## Artificial intelligence and machine learning generated conjectures with TxGraffiti
1,2Randy Davila
1Research and Development
RelationalAI
Berkeley, CA 94704, USA
```
Email: [email protected]
```
2Department of Computational Applied
Mathematics & Operations Research
Rice University
Houston, TX 77005, USA
```
Email: [email protected]
```
**Abstract**
_TxGraffiti is a machine learning and heuristic based artificial intelligence de-_
signed to automate the task of conjecturing in mathematics. Since its inception,
TxGraffiti has generated many surprising conjectures leading to publication in respectable mathematical journals. In this paper we outline the machine learning and
heuristic techniques implemented by TxGraffiti. We also recall its contributions to
the mathematical literature and announce a new online version of the program
available for anyone curious to explore conjectures in graph theory.
**Keywords: Automated conjecturing; machine learned conjecturing; TxGraffiti.**
**AMS subject classification: 05C69**
### 1 Introduction
The ability of carefully designed computer programs to generate meaningful mathematical conjectures has been demonstrated since the late 1980s, notably by Fajtlowicz’s GRAFFITI program [23]. Indeed, this heuristic-based program was the first
artificial intelligence to make significant conjectures in matrices, number theory,
and graph theory, attracting the attention of renowned mathematicians like Paul
Erd˝os, Ronald Graham, and Odile Favaron. Inspired by the pioneering work of
Fajtlowicz, and by interactions with mathematicians who considered conjectures of
GRAFFITI, we developed the TxGraffiti program, a modern conjecturing artificial
intelligence named in homage to this rich history of conjectures made by GRAF[FITI and now available as an interactive website. While our program TxGraffiti](https://txgraffiti.streamlit.app)
-----
draws inspiration from GRAFFITI and its successor Graffiti.pc by DeLaVi˜na [19],
it was developed independently and features several distinct design elements and
conjecturing capabilities, which we detail in this paper.
When discussing computer-assisted conjecturing, we remark that it is easy for a
computer to generate many plausible conjectures. For example, one might gather
a set of mathematical objects and test various functions applied to these objects
to identify potential relationships (inequalities). If a relationship holds across all
objects in the database, it becomes a plausible conjecture. For instance, given a
database of graphs and the ability to compute various parameters on said graphs,
a computer might quickly discover the relation:
_α(G) ≤_ _n(G),_ (1)
where α(G) is the independence number (the cardinality of a maximum set of pairwise non-adjacent vertices in G) and n(G) is the order (the number of vertices in
_G). A more refined bound for α(G) in nontrivial, connected graphs is:_
_α(G) ≤_ _n(G) −_ 1. (2)
Both inequalities 1 and 2 hold under specific conditions, and TxGraffiti is designed
to consider various hypotheses to form such conjectures. The mechanism for considering different hypotheses is a heuristic called Theo, detailed in Section 3.
To discover relationships like inequalities 1 and 2, TxGraffiti employs a machine
learning, data-driven approach using linear optimization methods. This approach
allows the program to find optimal parameters m, b ∈ R for example conjecturing
on α(G) in terms of another graph invariant, say i(G), presenting conjectures in
the form:
**Conjecture 1. If G satisfies certain boolean conditions, then**
_α(G) ≤_ _m · i(G) + b,_
_where this bound is sharp._
By integrating machine learning techniques with the Theo and Dalmation heuristics (see Section 2), TxGraffiti generates novel conjectures suitable for publication
in mathematical journals. In Section 2, we discuss the historical development of
AI-assisted conjecturing and relevant techniques. Section 3 details TxGraffiti’s implementation, Section 4 presents conjectures produced by TxGraffiti that have led
to mathematical publications, and Section 5 provides concluding remarks.
### 2 Related Work
In 1948, Turing proposed that intelligent machines could be a significant asset
in mathematical research, requiring substantial intelligence while involving “min_imal interaction with the external world” [33]. Following this vision, early work_
in computer-assisted mathematics includes Newell and Simon’s Logic Theorist program developed in the 1950s. This program was capable of proving some theorems
in first-order logic, and they boldly predicted that a computer would eventually discover and prove a crucial mathematical theorem [32]. This program, among others,
focused heavily on theorem proving, including a notable achievement in 1996 being
the computer proof of the Robbins Conjecture [30].
-----
Artificial intelligent conjecture-making is the other side of computer-assisted
mathematics and began with Wang’s work in the late 1950s [34]. Wang’s Program
_II generated numerous statements in propositional logic, which could be considered_
conjectures or potential theorems. Despite the program’s innovative approach, it
struggled to filter the vast number of generated statements to identify those of
significant interest, highlighting a key challenge in automated conjecture-making.
Indeed, as mentioned prior, a computer may easily generate thousands of plausible
relationships on a given database of mathematical objects.
A breakthrough in computer assisted conjecture-making was achieved with Fajtlowicz’s GRAFITTI program, the first to produce conjectures that led to published
mathematical research [20, 23]. Early versions of Graffiti faced the ”Sorcerer’s Apprentice Problem,” where the challenge was to manage the overwhelming number of
generated conjectures. This issue was mitigated by Fajtlowicz’s Dalmatian heuristic, which limited both the quantity and ensured the quality of the output conjectures [24]. This heuristic ensured that each conjecture was significant concerning at
least one object in the program’s database, and by doing so, also removed any new
potential conjecture that followed by transitivity from another potential conjecture.
GRAFITTI and its successor, Graffiti.pc, developed by DeLaVina, have been
instrumental in advancing computer assisted conjecture-making and have resulted
in numerous publications [19]. These programs, although not widely distributed,
have paved the way for modern automated conjecturing systems. TxGraffiti, while
inspired by these programs, was developed independently and incorporates unique
design elements and conjecturing capabilities. Namely, TxGraffiti employs a machine learning, data-driven approach using linear optimization methods to generate
plausible conjectured inequalities, thereafter, implementing two heuristics for filtering conjectures found; details to be described in Section 3.
Other notable programs in the domain of automated conjecturing include Lenat’s
_AM [25, 26, 27], Epstein’s GT [21, 22], Colton’s HR [9, 10, 11], Hansen and Ca-_
porossi’s AGX [3, 4], and M´elot’s Graphedron [31]. These programs have significantly contributed to various mathematical domains and have demonstrated the
potential of automated systems in aiding mathematical discovery.
### 3 Methodology
In designing a computer program that generates mathematical conjectures, the
first requirement is a database of mathematical objects. In the case of TxGraffiti,
these objects are edge lists of simple connected graphs. It is crucial to underscore
the importance of data quality. An extensive database is optional for the computer to identify non-trivial relationships among object properties. Instead, what
is needed is a collection of unique instances of the objects in question, such as special counter-examples or interesting families of graphs from the literature. In our
implementations, we utilized databases of several hundred objects, though we have
also experimented with thousands with little to no meaningful returns in conjecture
quality.
**3.1** **Feature Generation**
After a collection of mathematical objects is collected, the next step in the design
of TxGraffiti is to generate a table (a csv file representing a database) of various
precomputed functions on the objects in this database. Our framework mandates
-----
that at least two of these functions return numerical values (for pairwise comparison), while others can return numerical or Boolean values. See Figure 1 for an
illustration of this process; numerical properties are denoted by Pi, and Boolean
properties are denoted by Hi.
Figure 1: A mapping of a collection of N mathematical objects to a
table of numerical and Boolean properties.
Indeed, once a table of data like the one is available, TxGraffiti may conjecture
on the data; regardless if the data represents graph object data. Thus, if one were
to generate conjectures on different types of data, one would first create such a
table and then implement the steps in the following subsections.
**3.2** **Inequality Generation**
In this section, we propose and implement a simplified version of the following steps
for a computer program to generate inequalities relating properties of the objects
under consideration, with an emphasis on conjecture simplicity and on conjecture
strength.
1. Select a target property Pi – a precomputed and numerically valued function
on the objects in the database.
2. Choose an inequality direction (upper or lower) to bound the property Pi.
3. For each precomputed numerical function Pj, with j = i, use a supervised
_̸_
machine learning technique or linear program to find a function f such that
_Pi(_ ) _f_ ((Pj( )) holds for each object in the database, and the number
_O_ _≤_ _O_ _O_
of instances where the inequality is an equality is maximized.
4. If Pi( ) = f ((Pj( )) for all objects in the database, disregard f as a
_O_ _̸_ _O_ _O_
conjectured upper (or lower) bound on Pi. Otherwise, f is called a sharp
_bounding function; store f_ (Pj) as a conjectured upper (or lower) bound on Pi
and record the set of objects where Pi( ) = f (Pj( )); the size of this set
_O_ _O_ _O_
is the touch number of the conjecture.
TxGraffiti follows these steps at each instance of a user requesting a desired conjecture, with conjectured upper and lower bounds computed automatically through
linear programming formulations. For example, consider producing one conjectured
upper bound on Pi( ) in terms of another numerical valued function Pj( )). This
_O_ _O_
is achieved by solving a linear optimization problem, and in the simplest case,
TxGraffiti aims to minimize a some linear function f (m, b) subject to a set of constraints.
minimize _f_ (m, b)
_m,b_
subject to _Pi(_ ) _mPj(_ ) + b, Database,
_O_ _≤_ _O_ _∀O ∈_
-----
Target Pi
Sharp bounding function
**CONJECTURE:**
If is an object, then Pi( ) _mPj(_ ) + b,
_O_ _O_ _≤_ _O_
_Pj≠_ _i_
_y = mP_
If O is an object, then
and this bound is sharp.
Figure 2: Finding a possible (linear) upper bound on the target
property Pi in terms of property Pj.
The goal is to find the line with the slope m and y-intercept b that satisfies all
the inequalities and maximizes the number of times these inequalities hold with
equality. In this way, TxGraffiti searches for the best linear upper bound on Pi( )
_O_
in terms of Pj( ) that holds for all objects in the database given some hypothesis
_O_ _O_
for the objects to satisfy; see Figure 2 for a graphical illustration of the linear upper
bound.
The above process is done on each numerical column of the feature data; that
is, each pair of numerical functions is compared against each other and given a
proposed inequality conjectured between them. Each conjecture generated by the
above steps applies to all types of objects in the database. However, we can generate
even more conjectures. By applying the same steps to a subset of objects in the
database that satisfy a particular Boolean property (or combination of Boolean
properties), we may obtain conjectures that are less general but potentially stronger;
see Figure 3 for an illustration of this process.
**3.3** **Sorting and Filtering**
Once the optimal bounding functions for a given target invariant are found, we are
left with a list of possible conjectures, along with detailed data for each conjecture.
This data includes the set of objects that satisfy the conjecture’s hypothesis, the
graphs that attain equality, and the count of these graphs. At this point, our program implements a sorting procedure. That is, the list of conjectures is then sorted
in nonincreasing order with respect to the touch number of the conjectures. Thus,
the conjectures at the top of the list hold with equality more than the conjectures
towards the bottom of the list. Thus, this sorting aspect implemented by our program insures the conjectures at the top of the list are “stronger” than those at the
bottom of the list.
After the conjectures have been sorted according to their respective touch numbers, our program then implements the first of two filtering heuristics, called Theo.
-----
Target Pi
Sharp bounding function
**CONJECTURE:**
If is an object and _H1, then_
_O_ _O ∈_
_Pi(_ ) _mPj(_ ) + b,
_O_ _≤_ _O_
and this bound is sharp.
_Pj≠_ _i_
_y = mP_
If O
_Pi(_ )
_O_ _≤_
Figure 3: Finding a possible (linear) upper bound on the target
property Pi in terms of property Pj for objects in H1. Data points
associated with containment in H1 shown by orange dots, whereas
instances no in H1 shown by green dots.
This heuristic checks if any proposed inequality relation appears more than once
in the list of conjectures, and then only selects the proposed conjectures with this
inequality that have the most general hypothesis statement. For example, consider
the following two conjectures.
**Conjecture 2. If G is a connected and cubic graph, then**
_α(G) ≤_ _µ(G),_
_and this bound is sharp._
**Conjecture 3. If G is a connected and r-regular graph with r > 0, then**
_α(G) ≤_ _µ(G),_
_and this bound is sharp._
The Theo heuristic would automatically detect that the inequality α(G) ≤ _µ(G)_
appears in both Conjecture 2 and in Conjecture 3, and thereafter, check if the set
of graphs in the database that satisfy the hypothesis of Conjecture 2 also satisfy
the hypothesis of Conjecture 3. Since every cubic graph is also a regular graph, but
not vice versa, Theo would remove Conjecture 2 from the possible conjectures to
present to the user. The reasoning for this heuristic is to put more emphasize on
the more general conjecture, and thus, remove any conjecture that may follow from
a more general statement.
The list of conjectures is (optionally) further filtered by a variation of the Dalmation heuristic which we call Dalmation-static. Unlike the original Dalmation
heuristic used by GRAFFITI [24], our version of Dalmation takes as input a static
_list of conjectures and works as following:_
-----
**Dalmation-Static Heuristic:**
1. Let G be the set of graphs attaining equality in Conjecture 1 in the current
list of conjectures, recalling that the conjectures are sorted according to their
respective touch number.
2. For i = 2, . . ., N, if the set of graphs attaining equality in Conjecture i does
not contain a which is also graph not contained in G, then remove Conjecture
_i from the list of conjectures, otherwise, let G = G ∪Gi, where Gi is the set of_
graphs attaining equality in Conjecture i.
Finally, the conjectures are further filtered by removing any known conjectures.
This aspect of TxGraffiti requires maintenance and is one area where the program
would benefit from many mathematicians contributing to. Moreover, since conjectures are computed at each prompt to the program, anytime a new counter-example
is added to the database, new conjectures appear. This aspect motivated further
development of the program, where users may enter in counter-examples to better
the conjectures of the program. This functionality is available by emailing the author via the online website, but will in the future allow for a more straight-forward
approach.
The resulting set of conjectures is a set of conjectures that can be viewed as
_“mathematical strong”._ That is, the linear optimization methods first find proposed inequalities that are guaranteed to be sharp on a maximum number of graph
instances, thereafter, the Theo heuristic insures generality of the hypothesis for a
conjectures inequality, then the Dalmation-static heuristic insures presented conjectures only consist of inequalities providing “new” information, and finally the
touch number sorting insures conjectures at the beginning of a list are sharper than
ones that follow. In the following section we demonstrate how these processes show
promise for new mathematical insight in the realm of graph theory.
**3.4** **Code and Reproducibility**
For readily available examples of this process, see the GitHub repository associated
with the interactive website [12].
### 4 Results
In this section we present some results stimulated by conjectures of TxGraffiti. More
specifically, we highlight the following results inspired by conjectures of TxGraffiti
and listed in Table 1. Of the results listed in Table 1, we now focus on the result
pertaining to the independence number and matching number of regular graphs.
The originial conjecture that stimulated this result states that for any 3-regular
and connected graph G, the independence number α(G) is at most the matching
number µ(G).
**Conjecture 4. If G is a connected and cubic (3-regular) graph, then**
_α(G) ≤_ _µ(G),_
_where α(G) is the independence number and µ(G) is the matching number._
Notably, Conjecture 4 relates three of the oldest studied properties in graph
theory; namely, independent sets, matching sets, and regular graphs. For this
-----
|Conjecture|Graph Family|Authors and Publication|
|---|---|---|
|α(G) µ(G) ≤|cubic graphs|Caro et al. [6]|
|Z(G) β(G) ≤|claw-free graphs|Brimkov et al. [2]|
|α(G) 3γ (G) ≤ 2 t|cubic graphs|Caro et al. [7]|
|α(G) γ (G) ≤ 2|claw-free graphs|Caro et al. [7]|
|γ (G) 3µ(G) e ≥ 5|cubic graphs|Caro et al. [7]|
|Z(G) 2γ(G) ≤|cubic graphs|Davila and Henning [17]|
|Z (G) 3γ (G) t ≤ 2 t|cubic graphs|Davila and Henning [15]|
|Z(G) γ(G) + 2 ≤|cubic claw-free graphs|Davila [13]|
**Conjecture** **Graph Family** **Authors and Publication**
_α(G) ≤_ _µ(G)_ cubic graphs Caro et al. [6]
_Z(G) ≤_ _β(G)_ claw-free graphs Brimkov et al. [2]
_α(G)_ cubic graphs Caro et al. [7]
_≤_ [3]2 _[γ][t][(][G][)]_
_α(G)_ _γ2(G)_ claw-free graphs Caro et al. [7]
_≤_
_γe(G) ≥_ 5[3] _[µ][(][G][)]_ cubic graphs Caro et al. [7]
_Z(G) ≤_ 2γ(G) cubic graphs Davila and Henning [17]
_Zt(G) ≤_ [3]2 _[γ][t][(][G][)]_ cubic graphs Davila and Henning [15]
_Z(G) ≤_ _γ(G) + 2_ cubic claw-free graphs Davila [13]
Table 1: Notable conjectures in graph theory generated by TxGraffiti
and their corresponding publications.
reason, the author did not share Conjecture 4 for many months, believing it to be
trivial known. However, once shared with collaborators, it became apparent that
not only was this conjecture not known in the literature, but was also true. Indeed,
this conjecture was then generalized and proven, resulting in the following theorem;
the proof of which appears in [6], but is also given below to demonstrate this simple
and meaningful result.
**Theorem 5 (Caro et al. [6]). If G is an r-regular graph with r > 0, then**
_α(G) ≤_ _µ(G),_
_and this bound is sharp._
_Proof. Let G be an r-regular graph with r > 0. Let X ⊆_ _V (G) be a maximum_
independent set, and let Y = V (G) \ X. By removing edges from G that have both
endpoints in Y, we form a bipartite graph H with partite sets X and Y .
Since the removed edges were only those with both endpoints in Y, any vertex
in X will have the same open neighborhood in H as it does in G. Given that
_G is r-regular and X is an independent set, each vertex in X will have exactly r_
neighbors in Y .
Let S _X be chosen arbitrarily, and let e(S, NH_ (S)) denote the number of edges
_⊆_
from S to NH (S). Since each vertex in S has exactly r neighbors in Y, it follows
that e(S, NH (S)) = r _S_ . Additionally, since each vertex in NH (S) has at most r
_|_ _|_
neighbors in X, we also have e(S, NH (S)) _r_ _NH_ (S) . Thus, r _S_ _r_ _NH_ (S),
_≤_ _|_ _|_ _|_ _| ≤_ _|_ _|_
implying that _S_ _NH_ (S) . By Hall’s Theorem; see West [35], there exists a
_|_ _| ≤|_ _|_
matching M that can match X to a subset of Y . Since X is a maximum independent
set and M is also a matching in G, we conclude that α(G) = |M _| ≤_ _µ(G), proving_
the theorem.
Notably, by confirming Conjecture 2 with Theorem 5, we were inspired to include the more general hypothesis of regular graphs in conjectures presented by
TxGraffiti, and this resulted in the more general statement of Theorem 5 being
presented as a conjecture by the program. From an application point of view, the
resulting theorem due to Conjecture 2 is further interesting since the computation
of the matching number µ(G) is computable in polynomial time, whereas the computation of the independent number α(G) is NP-hard. Thus, the resulting theory
gathered from investigating Conjecture 2 has practical applications in graph theory
and the sciences.
-----
### 5 Conclusion
In this paper, we have described the artificial intelligence program TxGraffiti and
also provided evidence for its usefulness in mathematical research as its conjectures span various areas of graph theory; many leading to significant publications.
Moreover, we provide a new web-based interaction for TxGraffiti which may lead
to even further mathematical insight. We anticipate that further development and
application of the ideas underpinning TxGraffiti will lead to further insights into
computer assisted mathematics.
### References
[1] AIM Special Work Group, Zero forcing sets and the minimum rank of graphs,
_Linear Algebra Appl., 428 (7) (2008), 1628–1648._
[2] B. Brimkov, R. Davila, H. Schuerger, and M. Young, Computer assisted discovery: Zero forcing vs vertex cover, available at
[https://arxiv.org/pdf/2209.04552.pdf, (2022).](https://arxiv.org/pdf/2209.04552.pdf)
[3] G. Caporossi and P. Hansen, Variable neighborhood search for extremal
graphs: 1 The autographix system, Discrete Math. 212(1–2) (2000), 29–44.
[4] G. Caporossi and P. Hansen, Variable neighborhood search for extremal
graphs: 5 Three ways to automate finding conjectures, Discrete Math. 276(1–
**3) (2004), 81–94.**
[5] M. Aouchiche, G. Caporossi, P. Hansen, and M. Laffay, Autographix: A survey,
_Electron. Notes Discrete Math. 22 (2005), 515–520._
[6] Y. Caro, R. Davila, and R. Pepper, New results relating matching and independence, Discuss. Math. Graph Theory 42 (2020), 921–935.
[7] Y. Caro, R. Davila, M.A. Henning, and R. Pepper, Conjectures of TxGraffiti:
Independence, domination, and matchings, Australas. J. Comb. 84(2) (2022),
258–274.
[8] C. Chekuri and N. Korula, A graph reduction step preserving elementconnectivity and applications, Automata, Languages, and Programming,
(2009) 254–265.
[9] S. Colton, A. Bundy and T. Walsh, Automated concept formation in pure
mathematics, Proc. of the 16th Int. Jt. Conf. on Artif. Intell., vol. 2, IJCAI’99,
Morgan Kaufmann Publishers (1999), 786–791.
[10] S. Colton, Refactorable numbers—a machine invention, J. Integer Seq. 2
(1999), Article 99.1.2.
[11] S. Colton, Automated Theory Formation in Pure Mathematics, Springer, Heidelberg (2002).
[12] R. Davila, TxGraffiti, https://txgraffiti.streamlit.app, accessed 2024-06-19.
[13] R. R. Davila, Another conjecture of TxGraffiti concerning zero forcing and
[domination in graphs, arXiv preprint arXiv:2406.19231, (2024). Available at](http://arxiv.org/abs/2406.19231)
[https://doi.org/10.48550/arXiv.2406.19231.](https://doi.org/10.48550/arXiv.2406.19231)
[14] R. Davila, Total and Zero Forcing in Graphs, University of Johannesburg, PhD
Thesis (2019).
-----
[15] R. Davila and M. A. Henning, Total forcing versus total domination in cubic
graphs, Appl. Math. Comput., 354 (2019), 385–395.
[16] R. Davila and M.A. Henning, Zero forcing in claw-free cubic graphs, Bull.
_Malays. Math. Sci. Soc., 43 (2020), 673–688._
[17] R. Davila and M. A. Henning, Zero forcing versus domination in cubic graphs,
_J. Comb. Optim., 41 (2021), 553–577._
[18] A. Davies, P. Veliˇckovi´c, L. Buesing, S. Blackwell, D. Zheng, N. Tomaˇsev, R.
Tanburn, P. Battaglia, C. Blundell, A. Juh´asz, M. Lackenby, G. Williamson,
D. Hassabis, and P. Kohli, Advancing mathematics by guiding human intuition
with AI, Nature, 600 (2021), 70–74.
[19] E. DeLaVi˜na, Graffiti.pc: A variant of Graffiti, DIMACS Ser. Discret. Math.
_Theor. Comput. Sci. 69 (2005), p. 71._
[20] E. DeLaVi˜na, Some history of the development of Graffiti, Graphs and Discov_ery, DIMACS Ser. Discret. Math. Theor. Comput. Sci. 69, Amer. Math. Soc.,_
Providence, RI (2005), 81–118.
[21] S.L. Epstein, On the discovery of mathematical theorems, IJCAI (1987), 194–
197.
[22] S.L. Epstein, Learning and discovery: One system’s search for mathematical
knowledge, Comput. Intell. 4(1) (1988), 42–53.
[23] S. Fajtlowicz, On conjectures of GRAFFITI, Discret. Math., 72 (1988), 113–
118.
[24] C. E. Larson and N. Van Cleemput. Automated conjecturing I: Fajtlowicz’s
Dalmatian heuristic revisited, Artif. Intell. 231 (2016), 17–38.
[25] D.B. Lenat, The ubiquity of discovery, Artif. Intell. 9(3) (1977), 257–285.
[26] D.B. Lenat, On automated scientific theory formation: A case study using the
am program, Mach. Intell. 9 (1979), 251–286.
[27] D.B. Lenat, The nature of heuristics, Artif. Intell. 19(2) (1982), 189–249.
[28] E. Flandrin, R. Faudree, and Z. Ryj´aˇcek, Claw-free graph - a survey, Discrete
_Math. 214 (2016), 196–200._
[29] D. Nakamura and A. Tamura, A Revision of Minty’s Algorithm for Finding a
Maximum Weight Stable Set of a Claw-Free Graph, J. Oper. Res. Soc. Japan
**44 (2001), 194–204.**
[30] W. McCune, Solution of the Robbins problem, J. Autom. Reason., 19(3)
(1997), 263–276.
[31] H. M´elot, Facet defining inequalities among graph invariants: The system
graphedron, Discrete Appl. Math. 156(10) (2008), 1875–1891.
[32] H.A. Simon, A. Newell, Heuristic problem solving: the next advance in operations research, Oper. Res., 6(1) (1958), 1–10.
[33] A. Turing, Intelligent machinery. The Essential Turing, (2004), 395–432.
[34] H. Wang, Computer theorem proving and artificial intelligence, Computation,
_Logic, Philosophy, Springer (1990), 63–75._
[35] D. B. West, Introduction to Graph Theory 2nd Edition. Prentice-Hall (20010.
ISBN: 0-13-014400-2 (print)
10
-----
| [
"Randy, Davila"
] | 2024-07-03T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.02731v1 | https://arxiv.org/abs/2407.02731 | null |
Assessing the Emergent Symbolic Reasoning Abilities of Llama Large Language Models | Large Language Models (LLMs) achieve impressive performance in a wide range of tasks, even if they are often trained with the only objective of chatting fluently with users. Among other skills, LLMs show emergent abilities in mathematical reasoning benchmarks, which can be elicited with appropriate prompting methods. In this work, we systematically investigate the capabilities and limitations of popular open-source LLMs on different symbolic reasoning tasks. We evaluate three models of the Llama 2 family on two datasets that require solving mathematical formulas of varying degrees of difficulty. We test a generalist LLM (Llama 2 Chat) as well as two fine-tuned versions of Llama 2 (MAmmoTH and MetaMath) specifically designed to tackle mathematical problems. We observe that both increasing the scale of the model and fine-tuning it on relevant tasks lead to significant performance gains. Furthermore, using fine-grained evaluation measures, we find that such performance gains are mostly observed with mathematical formulas of low complexity, which nevertheless often remain challenging even for the largest fine-tuned models. | It is observed that both increasing the scale of the model and fine-tuning it on relevant tasks lead to significant performance gains, which are mostly observed with mathematical formulas of low complexity, which nevertheless often remain challenging even for the largest fine-tuned models. | ## Assessing the Emergent Symbolic Reasoning Abilities of Llama Large Language Models
Flavio Petruzzellis[1], Alberto Testolin[1][,][2], and Alessandro Sperduti[1]
1
Department of Mathematics, University of Padova, Padova, Italy
2
Department of General Psychology, University of Padova, Padova, Italy
**Abstract. Large Language Models (LLMs) achieve impressive perfor-**
mance in a wide range of tasks, even if they are often trained with
the only objective of chatting fluently with users. Among other skills,
LLMs show emergent abilities in mathematical reasoning benchmarks,
which can be elicited with appropriate prompting methods. In this work,
we systematically investigate the capabilities and limitations of popular
open-source LLMs on different symbolic reasoning tasks. We evaluate
three models of the Llama 2 family on two datasets that require solving
mathematical formulas of varying degrees of difficulty. We test a generalist LLM (Llama 2 Chat) as well as two fine-tuned versions of Llama 2
(MAmmoTH and MetaMath) specifically designed to tackle mathematical problems. We observe that both increasing the scale of the model
and fine-tuning it on relevant tasks lead to significant performance gains.
Furthermore, using fine-grained evaluation measures, we find that such
performance gains are mostly observed with mathematical formulas of
low complexity, which nevertheless often remain challenging even for the
largest fine-tuned models.
**Keywords: LLMs · open source · mathematical reasoning · formulas ·**
ListOps · arithmetic
**1** **Introduction**
Large language models (LLMs) featuring billions of parameters can exhibit sophisticated cognitive skills as a by-product of their design and training process
rather than being explicitly trained to learn such skills. For example, after being
trained on large unlabeled corpora of text with the objective of predicting the
next word in a sentence, without additional fine-tuning, LLMs can be “prompted”
to perform a remarkable variety of tasks, including two-digit arithmetic, question answering, text summarization, and language translation [2]. Mathematical reasoning is still considered a challenge for most LLMs [16], but it can be
partially elicited in large models using prompting techniques such as chain-ofthought prompting [21]. However, whether symbolic reasoning can be considered
an emerging ability is still a matter of intense debate [15,20].
In this work, we study the symbolic reasoning abilities of various large language models, focusing on open-source models from the Llama 2 family [17].
-----
2 F. Petruzzellis et al.
We adopt a symbolic reasoning benchmark consisting of synthetic datasets of
mathematical formulas characterized by the possibility of manipulating problem
difficulty at a fine-grained level. Although such problems may appear, and indeed are, distant from typical applications of LLMs, they can be effectively used
to systematically study the reasoning skills and the limitations of these models.
We tested Llama 2 models of different sizes, as well as MetaMath [22] and
MAmmoTH [23], two versions of the base Llama 2 model fine-tuned to solve
mathematical problems. In our experiments, we observe that Llama 2 models become more capable of solving symbolic reasoning problems as their size
grows and that fine-tuning on domain-specific problems can further improve
their performance. At the same time, by carefully analyzing how performance
improvements are related to problem difficulty, we find that the emergence of
symbolic reasoning is mainly observed when models (especially fine-tuned ones)
are probed with relatively simple formulas.
**2** **Related works**
The concept of emergence [1] has been successfully used to characterize the
dynamics of neural networks [5] and describe cognitive phenomena in terms of
self-organizing principles [8,24]. In recent years, it has attracted the interest of
the AI community following the observation that unexpected cognitive abilities
could emerge by simply increasing the size of deep learning models [20].
Indeed, transformer-based models from the GPT family were first shown to
exhibit remarkable abilities to perform new tasks from textual instructions or
from a few examples, while only being trained to autoregressively predict the
next word in a sentence [2]. Since then, several works have shown that LLMs
can carry out many new tasks that were not explicitly included in their training
data, including solving arithmetic operations [2] and performing commonsense
reasoning [20]. The ability of these models to reason about novel problems “zeroshot”, that is, without any direct training, has been directly compared to the
human ability to reason by analogy [19].
Reasoning abilities are generally tested in LLMs using commonsense reasoning tasks, math word problems and symbolic reasoning benchmarks [7,18,21].
The latter is a class of synthetically generated problems whose difficulty can be
systematically manipulated by deriving complex problems from the composition
of simple ones. One example is the “last letter concatenation” task [21], in which
the model is asked to output the string of characters obtained by concatenating
the last character of the words in a list (which can be arbitrarily long). Another
one is the “coin flip” problem [21], a version of the parity problem in which the
model should track the state of a coin after a variable number of flips.
A significant boost in performance on these benchmarks has been obtained
using chain-of-thought prompting [21] and its variants [7,18]. These prompting
methods let the model produce a sequence of reasoning steps through which it
can exploit the knowledge contained in the prompt to gradually derive the final
answer.
-----
Assessing the Emergent Symbolic Reasoning Abilities of Llama LLMs 3
**3** **Methodology**
In this work, we focused on a class of challenging reasoning problems in which the
model is tasked to solve symbolic mathematical formulas. As in other symbolic
reasoning benchmarks used to test Large Language Models, the problems we
examined do not require verbal reasoning to be solved. Instead, they require the
ability to discover and systematically apply an algorithmic reasoning procedure
that should enable generalization to problems of any level of difficulty.
We considered symbolic formulas that can be nested, which means that each
operand in a formula can in turn be another formula. This implies that, in
general, several reasoning steps might be required to solve a given formula. Indeed, similarly to other symbolic reasoning tasks, any formula can be solved by
the iterative application of a simple rewriting rule. For example, the arithmetic
expression (12+(3-(4+5))) can be solved by first identifying a solvable subexpression, i.e. (4+5), and then substituting the result obtained by solving that
sub-expression, namely 9, into the original expression, obtaining a simpler expression, i.e. (12+(3-9)), which can be further simplified by iteratively applying
the same procedure. We note that, given this problem structure, chain-of-thought
prompting could in principle allow LLMs to solve these tasks by decomposing
each formula and iteratively solving its simplest components.
Since the complexity of formulas can be characterized in terms of two parameters, that is, the nesting level and the number of operands involved, the
reasoning abilities of LLMs can be analyzed in a finer-grained way compared
to other reasoning benchmarks. This allowed us to compare the performance
of models of different sizes on problems of varying levels of difficulty and thus
investigate the dynamics of emergence of symbolic reasoning abilities.
**3.1** **Symbolic formulas**
We considered two types of formulas: operations on lists of single-digit integers
derived from the ListOps dataset [10] and arithmetic expressions [13].
The ListOps dataset was introduced to evaluate the capacity of neural networks to build parse trees of nested formulas. The original dataset included
formulas composed of operations on lists of integers, including minimum, maximum, median, and sum modulo 10 of a list of integers. To reduce the complexity
of the problem, we only used minimum, maximum, and sum modulo 10. We also
built data splits of ListOps formulas whose level of difficulty could be precisely
characterized. In particular, we made it possible to specify the number of arguments that appear in formulas at any level, and we fixed the number of nesting
points at each level of a formula to two, as in the case of arithmetic formulas (see
below). We evaluated models on ListOps formulas that had two to four operands
and one to four nesting levels. Furthermore, we slightly modified the original format of the ListOps formulas by using a more explicit functional notation since
it has been observed that the notation used to represent symbolic formulas can
strongly influence the performance of transformers on arithmetic tasks [11]. For
example, the formula [MAX 3 9 1] was rewritten in the new format as MAX(3,
-----
4 F. Petruzzellis et al.
```
9, 1), since this notation is more likely to be observed in other mathematical
```
datasets used for training and fine-tuning of the LLMs considered here.
For the arithmetic task, we generated arithmetic expressions with sum, subtraction, and multiplication operations between two integers sampled in the interval [−99, 99]. We considered formulas with one to four nesting levels. For each
nesting level, a formula was nested in two points: that is, exactly two operands
on that level could be other formulas. In this work, we were more interested in
testing the capacity of Large Language Models to systematically execute a sequence of operations, rather than their mathematical competence in arithmetic
with multi-digit numbers [3]. Therefore, when computing the final values of formulas we used the modulo 100 of the intermediate results, as done in previous
work on systematic generalization in transformers [4].
**3.2** **Models**
We evaluated the symbolic reasoning abilities of three models: Llama 2 Chat
[17], and two fine-tuned versions, MAmmoTH [23] and MetaMath [22]. For all
of them, we considered three model sizes with 7B, 13B, and 70B parameters.
The Llama 2 Chat model is a large language model optimized for dialogue use
cases, trained on a mix of publicly available data. It generally performs better
than existing open-source models, approaching some of the most powerful closedsource models on a series of safety benchmarks, and it achieves high performance
on a variety of tasks ranging from common sense reasoning to world knowledge,
reading comprehension, and mathematical problem solving [17].
We also chose to test two recently proposed fine-tuned versions of Llama 2,
MetaMath and MAmmoTH, that were designed to improve the mathematical
reasoning abilities of the base model using different fine-tuning strategies. MetaMath has been fine-tuned on MetaMathQA, a companion dataset created by
bootstrapping samples in the GSM8K and MATH datasets by rephrasing both
questions and answers with the aim of increasing variety in the training samples
[22]. MAmmoTH was created with the aim of generalizing to many different
mathematical and reasoning domains, hence it has been fine-tuned on eight different popular benchmarks and evaluated on both in-domain and out-of-domain
problems from different datasets [23].
**3.3** **Prompting Strategies**
In order to elicit the emergence of reasoning abilities in Llama 2 Chat, we
have initially tested zero-shot chain-of-thought prompting, a recently proposed
prompting method that was shown to achieve similar performance as chainof-thought prompting without the need to craft exemplars [7,14]. However, we
observed that Llama 2 Chat already produced reasoning steps in the output
using zero-shot prompting, presumably as the result of fine-tuning with reinforcement learning through human feedback [12], and that zero-shot chain-ofthought prompting did not further improve the model’s performance. Therefore,
we opted for a simpler zero-shot prompting strategy, in which we briefly describe
-----
Assessing the Emergent Symbolic Reasoning Abilities of Llama LLMs 5
the task and then directly ask the model to solve it. In the case of the ListOps
dataset, we also briefly describe the semantics of the operators that appear in
the expression. For example, a Llama 2 prompt to solve a ListOps formula could
be the following: MIN, MAX and SM are operators on lists of single-digit integers
_which have the semantics of minimum, maximum and sum modulo 10, respec-_
_tively. Solve the following expression involving these operators: MAX(3, 9, 1)._
_Give the final answer stating ‘The final answer is: <NUMBER>’._
In the case of MAmmoTH and MetaMath, we instead used the official prompting strategy used by the authors of the model during fine-tuning. The prompt
formats for the two models are similar: they both include an initial sentence
introducing a generic task to the model followed by task-dependent instructions
and a request to produce a response. In our case, we used the Llama 2 zero-shot
prompt as a description of the task to be solved and we inserted it in the official
prompt format. The MetaMath prompt contained an explicit request to solve
the problem step-by-step, while this was not required with MAmmoTH models
since they have been fine-tuned with reasoning problems solved via chain-ofthought prompting, and thus produce a sequence of reasoning steps by default.
For example, a MetaMath prompt to solve an arithmetic formula could be the
following: Below is an instruction that describes a task. Write a response that
_appropriately completes the request. Instruction: Solve the following arithmetic_
_expression: ((86+51)+(-74-35)). Take the modulo 100 of intermediate values,_
_i.e. keep the last two digits of the number with the sign. Give the final answer_
_stating ‘The final answer is: <NUMBER>’. Response: Let’s think step by step._
To extract the final answer from the model response we used regular expressions to match integers and take the last one appearing in the text. We
measured the performance of all models using sequence accuracy, which means
that an output was considered correct only if it exactly matched the target.
**4** **Experiments and results**
In this section, we present the results of our evaluations of the three models. We
first focused on the impact of model size on the performance, and then study
how performance is modulated by different levels of problem difficulty. Finally,
we take a closer look at the errors committed by the models on the simplest
examples, to better characterize their capacities and limitations.
**4.1** **Model size improves global accuracy**
Fig. 1 shows the performance of the three models considered on ListOps and
Arithmetic formulas, averaged across formulas of any nesting level and with any
number of operands. From this coarse-grained analysis, we can already observe
that accuracy always increases with model size. In particular, models with 13B
parameters seem to develop significantly better symbolic reasoning abilities than
their smaller 7B versions, and accuracy improves further in the largest models.
-----
F. Petruzzellis et al.
ListOps Arithmetic
0.60 ModelLlama2 0.20
0.55 MAmmoTH 0.18
MetaMath
0.15
0.50
0.12
0.45
0.10
Accuracy0.40
0.08
0.35
0.05
0.30
0.03
0.25
10 20 30 40 50 60 70 10 20 30 40 50 60 70
Model parameters (billions) Model parameters (billions)
Fig. 1: Average accuracy on the ListOps and Arithmetic tasks obtained by Llama
2, MAmmoTH, and MetaMath models of increasing size. Larger models (especially if fine-tuned on math tasks) achieve better performance than smaller ones.
From the aggregate performance metrics, it is also clear that solving ListOps
formulas is much easier than solving arithmetic ones, even for models that have
been specifically tuned on mathematical reasoning datasets (note the different
scale of the y-axes). This phenomenon could be due to the fact that arithmetic
formulas involve complex operations like multiplication between two-digit integers, and calculation of the modulo 100 of intermediate results.
It is also evident that fine-tuning Llama2 on mathematical problems improves
its capacity to solve the kind of math-based symbolic reasoning problems used in
our experiments. MetaMath models (especially the medium and large versions)
outperform MAmmoTH models on both tasks, probably because they have been
fine-tuned only on mathematical problems that are more similar to the ones we
consider here. Furthermore, in the case of ListOps, we find that fine-tuning
is especially effective in boosting the performance of 70B models, leading to
gains of 9% (MAmmoTH) and 17% (MetaMath) compared to the 70B Llama 2
model. This could indicate that in the case of tasks consisting of a composition
of relatively simple elementary operations, greater performance improvements
can be achieved by fine-tuning large models.
**4.2** **Accuracy improves more on simple formulas**
We now aim to better understand where the performance gain obtained by larger
models is concentrated. In Fig. 2, we report a fine-grained measure of the accuracy of the three models on groups of ListOps and arithmetic formulas of
increasing nesting levels. As expected, all models are generally more accurate
when solving formulas in the easiest splits of both tasks (i.e., formulas with only
one or two nested expressions). This indicates that increasing the nesting level
of a formula indeed makes the problem more difficult for all the models.
We further notice that as model size grows, accuracy generally increases more
on formulas with nesting levels 1 and 2, both for Arithmetic and ListOps. For
-----
Assessing the Emergent Symbolic Reasoning Abilities of Llama LLMs 7
ListOps
Llama2 MAmmoTH MetaMath
1.0
N1 N3
0.8 N2 N4
0.6
Accuracy0.4
0.2
10 20 30 40 50 60 70 10 20 30 40 50 60 70 10 20 30 40 50 60 70
Model parameters (billions) Model parameters (billions) Model parameters (billions)
Arithmetic
0.5 Llama2 MAmmoTH MetaMath
0.4
0.3
0.2
Accuracy
0.1
0.0
10 20 30 40 50 60 70 10 20 30 40 50 60 70 10 20 30 40 50 60 70
Model parameters (billions) Model parameters (billions) Model parameters (billions)
Fig. 2: Accuracy of Llama 2, MAmmoTH and MetaMath models on ListOps (top)
and arithmetic (bottom) formulas of varying levels of difficulty as a function of
model size. Nk indicates formulas with nesting level k.
ListOps we observe a slight improvement in accuracy with model size also for the
most complicated formulas, while for the most challenging arithmetic problems
the improvement is almost null. It is also interesting to notice that the largest
version of MetaMath (70B) is slightly less accurate than the intermediate version
(13B) in arithmetic problems with a single nesting, suggesting that scaling-up
model size might be detrimental in some cases [9].
These results suggest that the emergent symbolic reasoning abilities observed
in the largest models do not yet allow for compositional generalization [6], being mostly effective on relatively simple formulas. This holds even for fine-tuned
models, since their accuracy on the most challenging problems (four nested formulas) is comparable to that achieved by the base Llama 2 models.
**4.3** **Analysis of errors on simple formulas**
In order to solve the symbolic reasoning problems considered in our experiments,
the models should be able to solve both atomic operations, such as summing two
numbers or finding the maximum in a list, and apply the correct sequence of
solution steps in nested formulas. Since we generally observed that performance
mostly improved on simpler problems, in the following analysis we focus on
studying the capacity of the models to solve the simplest arithmetic and ListOps
formulas. To this aim, we isolated and analyzed the errors on data splits with a
single nesting level.
-----
F. Petruzzellis et al.
7B 13B 70B
60
40
Llama220
0
60
40
20
MAmmoTH
0
60
40
Metamath20
0
MAX SM MIN MAX SM MIN MAX SM MIN
Operation Operation Operation
Fig. 3: Type of errors made by the
models on ListOps formulas with a single nesting level. Absolute number of
errors is on the y-axes and operator
used in the formula is on the x-axis.
7B 13B 70B
1.0
0.5
Llama2
0.0
1.0
0.5
MAmmoTH
0.0
1.0
0.5
Metamath
0.0
+ & y>0- & y>0* & y>0+ & y<0- & y<0* & y<0+ & y>0- & y>0* & y>0+ & y<0- & y<0* & y<0+ & y>0- & y>0* & y>0+ & y<0- & y<0* & y<0
Type of formula Type of formula Type of formula
Fig. 4: Type of errors made by the
models on arithmetic formulas with
nesting level 1, grouped by operator
(+,-,∗) and sign of the result (y). Incidence of errors is measured by group.
In Fig. 3, we show the number of errors committed by the models on ListOps
formulas with a single nesting. We observe that for Llama 2, the scale of the
model allows to significantly improve its ability to solve min and max operations
(almost to perfection), while the sum modulo 10 becomes the most difficult
operation in the largest model, with a surprising deterioration in performance
compared to smaller-scale models. The trend is different for fine-tuned models,
which have presumably observed the sum modulo 10 operation more frequently
during training and can thus solve it better than the base model, with the largest
model versions reaching almost perfect accuracy on all atomic operations.
In Fig. 4, we show the mistakes made by the models on arithmetic formulas
with a single nesting level. We group input samples based on the type of operation appearing in the formulas and on the sign of the result, as we hypothesize
that operations involving negative operands could be more difficult than those
on positive ones. We then measure the incidence of errors in each group, i.e. the
fraction of formulas in each group that the model does not solve correctly. We
observe that both fine-tuning and reasoning abilities emerging with scale mainly
improve the models’ accuracy on formulas that have a positive result. By looking
at the reasoning steps produced by the models, we noticed that the vast majority of these errors are due to an incorrect calculation of the modulo operation,
which indeed proves to be more difficult for all models when it involves negative
operands.
-----
Assessing the Emergent Symbolic Reasoning Abilities of Llama LLMs 9
**5** **Conclusions**
Despite the widespread deployment of foundation models, we currently lack a
clear understanding of how they work, when they fail, and what are the capabilities and limitations of their seemingly emergent cognitive skills [15].
In this work, we studied the emergent symbolic reasoning abilities of opensource LLMs of the Llama family on mathematical formulas whose level of difficulty can be precisely manipulated. We considered Llama 2 Chat and two
variants of the model fine-tuned for mathematical reasoning, comparing the performance of small, medium, and large versions of each model.
We found that larger models are generally more capable of solving mathematical formulas compared to smaller ones, suggesting the emergence of symbolic reasoning abilities. However, a finer-grained analysis revealed that accuracy
mostly increased on formulas of low complexity, involving only a few nesting levels. Furthermore, by analyzing the models’ failures on such simple formulas, we
found that common expressions like the modulo operation can still represent a
challenge even for the largest and fine-tuned language models.
Overall, our results suggest that large language models still struggle in tasks
requiring symbolic reasoning, and further research is needed to design neural
architectures better suited for this type of tasks. While our findings are limited
to models in the Llama family, we believe that the proposed evaluation approach,
based on symbolic reasoning benchmarks in which the difficulty of samples can
be precisely characterized, can help to build a more sound understanding of the
potential and limitations of this technology.
**References**
1. Anderson, P.W.: More is different: Broken symmetry and the nature of the hierarchical structure of science. Science 177(4047), 393–396 (1972)
2. Brown, T., et al.: Language models are few-shot learners. Advances in neural information processing systems 33, 1877–1901 (2020)
3. Cognolato, S., Testolin, A.: Transformers discover an elementary calculation system
exploiting local attention and grid-like problem representation. In: 2022 International Joint Conference on Neural Networks (IJCNN). pp. 1–8. IEEE (2022)
4. Csordás, R., Irie, K., Schmidhuber, J.: The neural data router: Adaptive control flow in transformers improves systematic generalization. In: The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event,
[April 25-29, 2022. OpenReview.net (2022), https://openreview.net/forum?id=](https://openreview.net/forum?id=KBQP4A_J1K)
```
KBQP4A_J1K
```
5. Hopfield, J.J.: Neural networks and physical systems with emergent collective computational abilities. Proceedings of the national academy of sciences 79(8), 2554–
2558 (1982)
6. Hupkes, D., Dankers, V., Mul, M., Bruni, E.: Compositionality decomposed: How
[do neural networks generalise? J. Artif. Intell. Res. 67, 757–795 (2020). https:](https://doi.org/10.1613/JAIR.1.11674)
```
//doi.org/10.1613/JAIR.1.11674, https://doi.org/10.1613/jair.1.11674
```
-----
10 F. Petruzzellis et al.
7. Kojima, T., Gu, S.S., Reid, M., Matsuo, Y., Iwasawa, Y.: Large language models are zero-shot reasoners. In: Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., Oh, A. (eds.) Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - Decem[ber 9, 2022 (2022), http://papers.nips.cc/paper_files/paper/2022/hash/](http://papers.nips.cc/paper_files/paper/2022/hash/8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html)
```
8bb0d291acd4acf06ef112099c16f326-Abstract-Conference.html
```
8. McClelland, J.L.: Emergence in cognitive science. Topics in cognitive science 2(4),
751–770 (2010)
9. McKenzie, I.R., et al.: Inverse scaling: When bigger isn’t better. CoRR
**abs/2306.09479** (2023). `https://doi.org/10.48550/ARXIV.2306.09479,`
```
https://doi.org/10.48550/arXiv.2306.09479
```
10. Nangia, N., Bowman, S.R.: Listops: A diagnostic dataset for latent tree learning. In: Cordeiro, S.R., Oraby, S., Pavalanathan, U., Rim, K. (eds.) Proceedings
of the 2018 Conference of the North American Chapter of the Association for
Computational Linguistics, NAACL-HLT 2018, New Orleans, Louisiana, USA,
June 2-4, 2018, Student Research Workshop. pp. 92–99. Association for Com[putational Linguistics (2018). https://doi.org/10.18653/V1/N18-4013, https:](https://doi.org/10.18653/V1/N18-4013)
```
//doi.org/10.18653/v1/n18-4013
```
11. Nogueira, R.F., Jiang, Z., Lin, J.: Investigating the limitations of the transformers
[with simple arithmetic tasks. CoRR abs/2102.13019 (2021), https://arxiv.](https://arxiv.org/abs/2102.13019)
```
org/abs/2102.13019
```
12. Ouyang, L., et al.: Training language models to follow instructions with
human feedback. In: Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D.,
Cho, K., Oh, A. (eds.) Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems
2022, NeurIPS 2022, New Orleans, LA, USA, November 28 - Decem[ber 9, 2022 (2022), https://papers.nips.cc/paper_files/paper/2022/hash/](https://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914\f58805a001731-Abstract-Conference.html)
```
b1efde53be364a73914\f58805a001731-Abstract-Conference.html
```
13. Petruzzellis, F., Testolin, A., Sperduti, A.: A hybrid system for systematic generalization in simple arithmetic problems. In: Proceedings of the 17th International
Workshop on Neural-Symbolic Learning and Reasoning (2023)
14. Petruzzellis, F., Testolin, A., Sperduti, A.: Benchmarking GPT-4 on algorithmic
problems: A systematic evaluation of prompting strategies. In: Calzolari, N., Kan,
M.Y., Hoste, V., Lenci, A., Sakti, S., Xue, N. (eds.) Proceedings of the 2024 Joint
International Conference on Computational Linguistics, Language Resources and
Evaluation (LREC-COLING 2024). pp. 2161–2177. ELRA and ICCL, Torino, Italia
[(May 2024), https://aclanthology.org/2024.lrec-main.195](https://aclanthology.org/2024.lrec-main.195)
15. Schaeffer, R., Miranda, B., Koyejo, S.: Are emergent abilities of large language
models a mirage? arXiv preprint arXiv:2304.15004 (2023)
16. Testolin, A.: Can neural networks do arithmetic? a survey on the elementary numerical skills of state-of-the-art deep learning models. Applied Sciences 14(2), 744
(2024)
17. Touvron, H., et al.: Llama 2: Open foundation and fine-tuned chat models.
[CoRR abs/2307.09288 (2023). https://doi.org/10.48550/ARXIV.2307.09288,](https://doi.org/10.48550/ARXIV.2307.09288)
```
https://doi.org/10.48550/arXiv.2307.09288
```
18. Wang, X., Wei, J., Schuurmans, D., Le, Q.V., Chi, E.H., Narang, S., Chowdhery,
A., Zhou, D.: Self-consistency improves chain of thought reasoning in language
models. In: The Eleventh International Conference on Learning Representations,
[ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net (2023), https://](https://openreview.net/pdf?id=1PL1NIMMrw)
```
openreview.net/pdf?id=1PL1NIMMrw
```
-----
Assessing the Emergent Symbolic Reasoning Abilities of Llama LLMs 11
19. Webb, T., Holyoak, K.J., Lu, H.: Emergent analogical reasoning in large language
models. Nature Human Behaviour 7(9), 1526–1541 (2023)
20. Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama,
D., Bosma, M., Zhou, D., Metzler, D., et al.: Emergent abilities of large language
models. arXiv preprint arXiv:2206.07682 (2022)
21. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E.H., Le,
Q.V., Zhou, D.: Chain-of-thought prompting elicits reasoning in large language
models. In: Koyejo, S., Mohamed, S., Agarwal, A., Belgrave, D., Cho, K., Oh, A.
(eds.) Advances in Neural Information Processing Systems 35: Annual Conference
on Neural Information Processing Systems 2022, NeurIPS 2022, New Orleans, LA,
USA, November 28 - December 9, 2022 (2022)
22. Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J.T., Li, Z., Weller,
A., Liu, W.: Metamath: Bootstrap your own mathematical questions for large
[language models. CoRR abs/2309.12284 (2023). https://doi.org/10.48550/](https://doi.org/10.48550/ARXIV.2309.12284)
```
ARXIV.2309.12284, https://doi.org/10.48550/arXiv.2309.12284
```
23. Yue, X., Qu, X., Zhang, G., Fu, Y., Huang, W., Sun, H., Su, Y., Chen, W.:
Mammoth: Building math generalist models through hybrid instruction tuning.
[CoRR abs/2309.05653 (2023). https://doi.org/10.48550/ARXIV.2309.05653,](https://doi.org/10.48550/ARXIV.2309.05653)
```
https://doi.org/10.48550/arXiv.2309.05653
```
24. Zorzi, M., Testolin, A.: An emergentist perspective on the origin of number sense.
Philosophical Transactions of the Royal Society B: Biological Sciences 373(1740),
20170043 (2018)
-----
| [
"Flavio, Petruzzellis",
"Alberto, Testolin",
"Alessandro, Sperduti"
] | 2024-06-05T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.06588 | https://arxiv.org/abs/2406.06588 | https://www.semanticscholar.org/paper/77338785e65aacf4d378034ff9822c00af45506c |
Augmenting Large Language Models with Symbolic Rule Learning for Robust Numerical Reasoning | While some prompting strategies have been proposed to elicit reasoning in Large Language Models (LLMs), numerical reasoning for machine reading comprehension remains a difficult challenge. We propose a neuro-symbolic approach that uses in-context learning with LLMs to decompose complex questions into simpler ones and symbolic learning methods to learn rules for recomposing partial answers. We evaluate it on different numerical subsets of the DROP benchmark; results show that it is competitive with DROP-specific SOTA models and significantly improves results over pure LLM prompting methods. Our approach boasts data efficiency, since it does not involve any additional training or fine-tuning. Additionally, the neuro-symbolic approach facilitates robust numerical reasoning; the model is faithful to the passage it has been presented, and provides interpretable and verifiable reasoning traces. | A neuro-symbolic approach that uses in-context learning with LLMs to decompose complex questions into simpler ones and symbolic learning methods to learn rules for recomposing partial answers and is competitive with DROP-specific SOTA models and significantly improves results over pure LLM prompting methods. | # Augmenting Large Language Models with Symbolic Rule Learning for Robust Numerical Reasoning
**Hadeel Al-Negheimish[1]** **Pranava Madhyastha[2][,][1]** **Alessandra Russo[1]**
1
Imperial College London
2
City, University of London
**Abstract**
While some prompting strategies have been proposed to elicit reasoning in Large
Language Models (LLMs), numerical reasoning for machine reading comprehension remains a difficult challenge. We propose a neuro-symbolic approach that uses
in-context learning with LLMs to decompose complex questions into simpler ones
and symbolic learning methods to learn rules for recomposing partial answers. We
evaluate it on different numerical subsets of the DROP benchmark; results show
that it is competitive with DROP-specific SOTA models and significantly improves
results over pure LLM prompting methods. Our approach boasts data efficiency,
since it does not involve any additional training or fine-tuning. Additionally, the
neuro-symbolic approach facilitates robust numerical reasoning; the model is faithful to the passage it has been presented, and provides interpretable and verifiable
reasoning traces.
**1** **Introduction**
Numerical reasoning in Machine Reading Comprehension (MRC) is a challenging task; it involves
identifying the terms from a passage relevant to some complex question and reasoning about them.
This has been previously tackled with specialised architectures, some containing modules for each
reasoning type [3, 11, 5, 13], or with Reasoning Templates [1]. These approaches incurred a
significant data overhead to train, where an auxiliary search identifies all possible paths to the answer,
and an engineering overhead to extend to more reasoning types than have been previously defined.
Recent advancements in Large Language Models (LLMs) gave rise to prompting strategies to
elicit reasoning, like Chain-of-Thought and Successive Prompting [17, 6, 18, 4, 9]. In all of these
approaches, it falls upon the LLM to generate the steps needed to reason about a question, and in
some, it also calculates the final answer. The quality of the results is heavily dependent on the family
of LLMs used and the number of parameters it contains. Furthermore, how to ground models to the
given passage and make its reasoning faithful to its contents is not obvious.
In this approach, we follow the line divide-and-conquer approaches [1, 4, 18, 9] that break down
complex questions into simpler subquestions that are easier to answer with single-span reading
comprehension (RC) models. In contrast to previous approaches, we do not rely on templates or
LLMs for instructions on how to reason about the partial answers. Instead, we propose learning
symbolic rules that express the numerical reasoning needed to compute the final answer from partial
answers, given few-shot examples. We leverage in-context learning with LLMs to decompose
complex questions and symbolic learning methods (like ILASP [7]) to learn rules to recompose
partial answers. Figure 1 illustrates this neuro-symbolic approach.
We evaluate our approach on different numerical subsets of the DROP discrete reasoning MRC
benchmark [3]. Our results show that even without any special training or fine-tuning, it is competitive
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
[...] As of the 2000 UnitedStates Census of 2000, therewere 47,829 people, 15,137 Decomposition(2) Questionvia LLM `q2q1` _How many familiesare there?How many householdsare there?_ (e.g. BERTQA)Single-hop RC `a2a1` 15,13710,898
households, and 10,898
families residing in the city. The (1) Get few-shot Calculate
population density was 7,921.7people per square mile examples Answer 4239
(3,057.4/km2). [...]
(3) Rule Learning to
**How many more householdsare there than families?** Partial AnswersRecompose `solution(V1,V2,V3):- subtraction(V1,V2,V3).`
via ILASP
Figure 1: Overview of approach: After collecting few-shot examples for a test question, LLMs are
used to decompose the complex question into simpler single-span extraction questions, symbolic
learning is used to induce the rule needed to arrive at the final answer.
with DROP-specific models for most splits. Our results further show that this approach significantly
improves performance over pure LLM prompting methods, in addition to bridging the gap in
performance between smaller and larger LLMs.
By using LLMs to solve small concrete tasks, and lifting the reasoning to a symbolic module, like
ILASP, we can show the reasoning steps that are used to arrive at the final answer, and be sure that
the answer is indeed based on that. Symbolic learning also allows generalisation with little data. This
approach combines the complementary strengths of LLMs and symbolic learning.
**2** **Our Approach**
At a high level, given a test question and passage, few-shot examples are selected from a small
annotated subset of training examples that contain question decompositions. These examples of
decompositions are then fed along with the test question to an LLM to be decomposed into simpler
subquestions, which can be fed to a single-span RC model to extract the partial answers. The few-shot
examples also form the basis for the positive examples fed to a Symbolic Learner to find a rule that
covers the operation used to reach the answer. The final answer for the test question is then calculated
using the learned rule and partial answers, as illustrated in Figure 1. The novelty of our approach
mainly lies in learning how to recompose partial answers using symbolic learning. No fine-tuning is
required for any component of this approach, where we opt to use off-the-shelf models; making it
generalisable, simple and cheap to implement.
**Collecting few-shot examples** In this work, we only need few-shot examples to tackle the complex
reasoning task. We build upon the small annotated subset provided by Successive Prompting [4],
which contains 300 examples from the DROP training set, with annotations of chain-of-thought
reasoning traces and question decompositions. These examples will be the basis from which we learn
how to decompose a complex test question and how to recompose partial answers. We explore two
approaches to select few-shot examples for each test question: the first is based on finding the nearest
neighbours of the complex test question in the embedding space of the annotated questions using
sentence embeddings [12], where the closest k annotated questions to a test question are retrieved
by querying an index _Q, which contains the annotated subset of questions. The second approach_
_I_
defines a canonical set of examples from the annotated questions for each given type, and transforms
the task into type-prediction; which we do by prompting an Alpaca [14] 7B model given a question
and a single demonstration of each of the types: addition, subtraction and negation. Throughout this
work, we use three examples (k = 3) for few-shot learning.
**Question decomposition** Given the few-shot examples collected in the previous step, we construct
a textual prompt to decompose a question into simpler ones using the annotated decompositions of
these examples as demonstrations, appended with the complex test question at the end. With this
prompt, an LLM generates a completion that contains simpler subquestions for the test question.
This is analogous to the question decomposition step in the in-context learning setting of Successive
Prompting [4], Self-ask Prompting [9] and Least-to-Most Prompting [18].
**Single-span reading comprehension** Once we have decomposed questions into simpler, singlespan extraction questions, we can use the subquestions to extract the appropriate terms for reasoning
-----
from the passage. We opt to make use of a pre-trained off-the-shelf single-span extraction model; a
BERT-based model fine-tuned on the popular single-span MRC benchmark SQuAD [10] to extract
two indices from the passage that denote the start and end of the answer span.
**Learning to recompose partial answers** We propose learning the rule needed to recompose partial
answers based on the few-shot examples, using a symbolic learner. An ideal candidate for this task is
the Inductive Learning from Answer Set Programs (ILASP) system [7]. It is a logic-based machine
learning system that induces ASP rules to cover the set of positive examples without covering negative
examples using efficient combinatorial search offered by answer-set solvers. The noisy learning task,
_ILPLOAS[noise]_ [=][ ⟨][B, S][M] _[, E][⟩][, enables robust learning of rules in the presence of noise in the examples]_
by assigning a penalty for not covering each example. Background knowledge, B, encodes what we
know about a problem, which allows injecting existing knowledge. In this case, we define the space
of possible operations (addition, subtraction and negation), type declarations, and constraints that
apply to this problem. The hypothesis space, SM, is defined by the mode declarations, which state
which predicates may appear in the head and body of learnable rules. In our formulation, positive
examples are created by augmenting the annotation of the gold answer for each of the few-shot
examples with partial answers retrieved by a single-hop RC model, given the annotated subquestions.
The task is to learn a rule that finds the final answer from the partial answers. A penalty is assigned
to each of the examples based on the scores given by the single-hop RC model, where a lower score
means that the RC model is less confident in the answer, and a smaller penalty should be incurred for
not covering it. A full example of this learning task is specified in Appendix (§5.3).
**Calculate answer** As the final step of our approach, the answer to the complex question is calculated
by applying the rule predicted by the symbolic learner on the retrieved partial answers of the simpler
questions from the single-hop RC model. In the example in Figure 1, this is done by subtracting the
two numbers 15137 and 10898. Since this component is applied symbolically, there is no risk of
miscalculating an answer given an operation and its operands.
**3** **Experiments**
We evaluate and compare our approach to different DROP [3] development set data splits, each
covering a different reasoning type. Subtraction Clean and Noisy include subtraction questions,
curated manually and heuristically, respectively. Arithmetic and Negation include a subset of 500
questions predicted as these types by MTMSN [5]. Further details about these splits are described in
Appendix (§5.1).
In this work, models are used off-the-shelf. No models are fine-tuned for our specific tasks. We
experimented with four different LLMs; the first two, OpenAI’s GPT3.5Turbo [8] and Cohere’s
Command [2] are accessed via an API. Since these models are proprietary, their architecture and
training procedure details are unknown, but they are expected to contain 52B parameters or more.
The other two are open-source, the LLaMa 7B [15], and Alpaca 7B [14] models, where the first is a
base model and the second is instruction fine-tuned similar to Wang et al. [16].
Our baselines include models were designed specifically for DROP, the problem decomposition
model Reasoning Templates[1] and the module-based MTMSN [5], in addition to pure prompting
methods of LLMs[1]: the first is Zero-Shot evaluation, where the prompt only includes the test question
and passage. The second is Chain-of-Thought Prompting [17], which uses the chain-of-thought
annotation for the 3-nearest neighbours in the small annotated set. Furthermore, two ablations are
considered for our approach: one where we assume that we know the type of the test question
(gold-type) and its canonical set of few-shot examples for that type are used; an upper-bound for our
approach, and another where random examples are used from the small annotated set; a lower bound.
Table 1 shows our results using the accuracy of the final answer.
Firstly, evaluations show that our approach surpasses Reasoning Templates when using a gold-type,
even though no templates are engineered. Type prediction performs almost as well as using gold-type
for all models. Our approach is competitively close to MTMSN, surpassing it on the Negation
type. While MTMSN has higher accuracy for more settings, we note that MTMSN has been trained
specifically for this task with large amounts of data, and generalising to more reasoning beyond
1We set Temperature to 0 to reduce variation and make the generations more deterministic
-----
Table 1: Accuracy of the final answer for our approach, compared to benchmark-specific models, in
addition to pure LLM prompting baselines.
**Model** **Subtraction** **Subtraction** **Arithmetic** **Negation**
**Clean** **Noisy**
Reasoning Templates 74.40 64.00 26.00 0.00
- Subtraction [1]
MTMSN [5] 86.50 81.30 72.60 94.20
cohere command 17.31 15.58 19.20 13.60
GPT3.5Turbo 73.08 67.83 58.80 57.60
llama7b 5.77 5.49 9.40 4.60
alpaca7b 0.00 3.25 6.60 4.20
cohere command 46.15 46.19 43.40 68.40
GPT3.5Turbo 67.31 67.71 61.20 79.60
llama 7b 30.77 25.90 29.40 42.40
alpaca 7b 21.15 20.51 20.80 18.20
cohere command 80.77 (+34.62) 65.70 (+19.51) - 93.80 (+25.4)
llama7b 76.92 (+46.15) 64.23 (+38.33) - 95.60 (+53.2)
alpaca7b 78.85 (+57.70) 63.57 (+43.06) - 94.40 (+76.2)
cohere command 48.08 37.78 16.40 0.00
llama7b 26.92 29.48 16.40 0.00
alpaca7b 36.54 33.18 15.60 0.00
cohere command 17.31 14.24 7.40 0.00
llama7b 5.77 8.18 3.40 0.20
alpaca7b 11.53 9.53 4.40 0.00
cohere command 80.77 58.40 29.40 90.80
llama7b 75.00 57.06 27.20 91.20
alpaca7b 76.92 55.94 31.00 90.20
**DROP-specific**
**baselines**
**Pure LLM**
**Prompting**
**Ours**
_Zero-shot_
_3-shot KNN_
_Chain-of-Thought_
_Using a gold type of_
_3 examples per_
_reasoning type_
_Using 3 KNN_
_examples from_
_annotated 300 set_
_Using 3 random_
_examples from_
_annotated 300 set_
_Using Type_
_Prediction (alpaca7b_
_model)_
the previously defined ones would involve re-engineering its architecture and retraining the model.
Whereas in our approach, the learning task is in charge of identifying the reasoning involved, and it is
lightweight to extend.
Furthermore, we observe that our approach improves performance for all LLMs (absolute improvement is highlighted in parentheses, we exclude GPT3.5Turbo due to concerns of data contamination)
over pure prompting methods. We also find that our approach bridges the gap between smaller and
larger LLMs (7B and 52B), where they have comparable performance with each other, whereas the
difference in performance is stark in the pure prompting setting.
While using KNN is better than using random few-shot examples, performance remains suboptimal,
indicating an issue with this component. We investigate this issue and find that the similarity function
of sentence embeddings [12] does not necessarily retrieve questions that have similar reasoning, and
seems to reflect lexical similarity more. While we use an encoder trained on the QQP-paraphrasing
task, this issue could have arisen from data scarcity in the small annotated subset; meaning that
questions of a similar type can be distant in the embedding space. See Appendix (§5.2) for examples.
In this proof-of-concept, we have defined rule learning for a limited space of operations (subtrac_tion, addition, and negation) with shallow reasoning. However, despite these limitations, we have_
demonstrated in this paper that using this neuro-symbolic approach is encouraging; it facilitates
robust numerical reasoning, as shown by the significant improvement over pure LLM prompting.
It is interpretable, reuses smaller, modular components without the need to collect large amounts
of training data, and provides some guarantees on the provided reasoning traces. Future work will
involve extending the task to allow for nested reasoning and learning commonsense knowledge to
solve other complex questions beyond numerical reasoning.
**4** **Conclusion**
In this work, we proposed a neuro-symbolic approach to tackle numerical reasoning problems in
MRC. It leverages in-context learning with LLMs to decompose complex questions and symbolic
learning methods to learn rules for recomposing partial answers. We show that this simple approach
is comparable with DROP-specific SOTA models, despite not needing large amounts of training data.
It also bridges the gap between the performance of smaller and larger LLMs, in addition to providing
reliable, interpretable reasoning traces.
-----
**Acknowledgments and Disclosure of Funding**
This research has been supported by a PhD scholarship from King Saud University. We thank Daniel
Cunnington for his support using ILASP and discussions on problem formulation. We thank our
anonymous reviewers for their constructive feedback.
**References**
[1] Hadeel Al-Negheimish, Pranava Madhyastha, and Alessandra Russo. Discrete reasoning templates for
natural language understanding. In Proceedings of the 16th Conference of the European Chapter of
_the Association for Computational Linguistics: Student Research Workshop, pages 80–87, Online, April_
[2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.eacl-srw.12. URL https:](https://aclanthology.org/2021.eacl-srw.12)
```
//aclanthology.org/2021.eacl-srw.12.
```
[[2] Cohere. Cohere documentation - models. https://docs.cohere.com/docs/models, 2023.](https://docs.cohere.com/docs/models)
[3] Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP:
A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proc. of NAACL,
2019.
[4] Dheeru Dua, Shivanshu Gupta, Sameer Singh, and Matt Gardner. Successive prompting for decomposing
complex questions. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language
_Processing, pages 1251–1265, Abu Dhabi, United Arab Emirates, December 2022. Association for_
[Computational Linguistics. URL https://aclanthology.org/2022.emnlp-main.81.](https://aclanthology.org/2022.emnlp-main.81)
[5] Minghao Hu, Yuxing Peng, Zhen Huang, and Dongsheng Li. A multi-type multi-span network for reading
comprehension that requires discrete reasoning. In Proceedings of the 2019 Conference on Empirical
_Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language_
_Processing (EMNLP-IJCNLP), pages 1596–1606, Hong Kong, China, November 2019. Association_
for Computational Linguistics. doi: 10.18653/v1/D19-1170. [URL https://aclanthology.org/](https://aclanthology.org/D19-1170)
```
D19-1170.
```
[6] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language
models are zero-shot reasoners. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho,
[editors, Advances in Neural Information Processing Systems, 2022. URL https://openreview.net/](https://openreview.net/forum?id=e2TBb5y0yFf)
```
forum?id=e2TBb5y0yFf.
```
[7] Mark Law, Alessandra Russo, and Krysia Broda. The ilasp system for inductive learning of answer set
programs, 2020.
[[8] OpenAI. Openai documentation - models - gpt-3.5. https://platform.openai.com/docs/models/](https://platform.openai.com/docs/models/gpt-3-5)
```
gpt-3-5, 2023.
```
[9] Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. Measuring and
narrowing the compositionality gap in language models, 2023.
[10] Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for
machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural
_Language Processing, pages 2383–2392, Austin, Texas, November 2016. Association for Computational_
[Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264.](https://aclanthology.org/D16-1264)
[11] Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan Liu. NumNet: Machine reading comprehension with
numerical reasoning. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language
_Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP),_
pages 2474–2484, Hong Kong, China, November 2019. Association for Computational Linguistics. doi:
[10.18653/v1/D19-1251. URL https://aclanthology.org/D19-1251.](https://aclanthology.org/D19-1251)
[12] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks.
In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th
_International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982–3992,_
Hong Kong, China, November 2019. Association for Computational Linguistics. doi: 10.18653/v1/
[D19-1410. URL https://aclanthology.org/D19-1410.](https://aclanthology.org/D19-1410)
[13] Elad Segal, Avia Efrat, Mor Shoham, Amir Globerson, and Jonathan Berant. A simple and effective
model for answering multi-span questions. In Proceedings of the 2020 Conference on Empirical Methods
_in Natural Language Processing (EMNLP), pages 3074–3080, Online, November 2020. Association for_
[Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.248. URL https://aclanthology.](https://aclanthology.org/2020.emnlp-main.248)
```
org/2020.emnlp-main.248.
```
[14] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang,
[and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model. https://github.](https://github.com/tatsu-lab/stanford_alpaca)
```
com/tatsu-lab/stanford_alpaca, 2023.
```
-----
[15] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
Grave, and Guillaume Lample. Llama: Open and efficient foundation language models, 2023.
[16] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and
Hannaneh Hajishirzi. Self-instruct: Aligning language models with self-generated instructions, 2023.
[17] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. In Alice H.
Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information
_[Processing Systems, 2022. URL https://openreview.net/forum?id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)_
[18] Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V Le, and Ed H. Chi. Least-to-most prompting enables complex
reasoning in large language models. In The Eleventh International Conference on Learning Representations,
[2023. URL https://openreview.net/forum?id=WZH7099tgfM.](https://openreview.net/forum?id=WZH7099tgfM)
**5** **Supplementary Material**
**5.1** **Datasets**
Our evaluations include different reasoning splits from the DROP [3] devset. These have been curated
as follows:
**Subtraction Clean A subset of 52 subtraction questions that have been manually curated.**
**Subtraction Noisy A subset of 892 subtraction questions that have been heuristically curated based**
on the starter trigrams, where we include questions starting with ‘How many more’ or ‘How
_many fewer’._
**Arithmetic A subset of 500 questions randomly sampled from the 3022 questions MTMSNLARGE**
predicted as add/sub type (where −, +, or 0 are assigned to every number in the passage).
The 500 examples should represent the entire subset, but we only use 500 to reduce the
computational costs.
**Negation A subset of 500 questions randomly sampled from the 1098 questions MTMSNLARGE**
predicted as a logical negation type, where the answer is usually (100-X). These are questions
like ‘What percent are not non-families?’.
**5.2** **Nearest-neighbors evaluation**
We briefly mentioned in the paper that we have investigated the relatively low performance of our
approach with KNN demonstrations, which we attribute to the similarity function capturing lexical
similarity more than shared reasoning. Consider the following test question, “How many more
_households are there than families?", which is of a Subtraction type._
The two closest neighbours of annotated questions are:
_“How many percent are not households made up of individuals?" – which is of type Negation_
_“Which is larger, families or households?" – which is of type Comparison_
Learning how to solve the example questions does not inform us in deciding how to approach the
test question, so these are not the best demonstrations to use for learning the reasoning needed for a
complex test question.
-----
**5.3** **ILASP task**
For the running example used in this paper, below is the ILASP program generated from the partial
answers to each example subquestion, and the annotated answer. Each example is associated with a
unique label exi and a penalty value following the @.
**Example 1 %examples**
```
#pos(ex0@33, { result (69309) }, { }, {
term (1,181035).
term (2,111726).
:- result(X), X != 69309.
}).
#pos(ex1@26, { result (27507) }, { }, {
term (1,90649).
term (2,63142).
:- result(X), X != 27507.
}).
#pos(ex2@33, { result (768) }, { }, {
term (1,8061).
term (2,7293).
:- result(X), X != 768.
}).
% background knowledge
:- term(1,X0), term(2,X1), result(Y1), result(Y2), Y1 != Y2.
result(Y) :- term(1,X0), term(2,X1), solution(X0,X1,Y).
result(Y):- term(1,X0), solution(X0,Y).
subtraction(X0,X1,Y) :- num(X0), num(X1), r(Y), Y=X0 -X1.
addition(X0, X1, Y):- num(X0), num(X1), r(Y), Y=X0+X1.
neg(X0, Y):- num(X0), r(Y), Y=100 -X0.
%type declarations
num (181035).
num (111726).
r(69309).
num (90649).
num (63142).
r(27507).
num (8061).
num (7293).
r(768).
%mode declarations
#modeh(solution(var(num), var(num), var(r))).
#modeh(solution(var(num), var(r))).
#modeb (1, subtraction(var(num), var(num), var(r)), (positive)).
#modeb (1, addition(var(num), var(num), var(r)), (positive)).
#modeb (1, neg(var(num), var(r)), (positive)).
#maxv (3).
```
_ILASP learns that the examples used a subtraction operation to arrive at the final answer,_
_predicting the rule:_
```
solution(V1,V2,V3) :- subtraction(V1,V2,V3).
```
**5.4** **Illustrative Example**
Our approach helps facilitate more robust numerical reasoning in machine reading comprehension
than using LLMs on their own. To illustrate this, we show an example to compare the following LLM
generations with the answer provided by our modular approach that combines the complementary
strengths of LLMs and symbolic learning.
**Example 2 Consider the following example:**
_“There were 74,285 households, out of which 21,922 (29.5%) had children_
_under the age of 18 living in them, 36,729 (49.4%) were marriage living_
_together, 7,685 (10.3%) had a female householder with no husband present,_
_3,804 (5.1%) had a male householder with no wife present. There were 4,386_
-----
_(5.9%) POSSLQ, and 504 (0.7%) same-sex partnerships. 18,489 households_
_(24.9%) were made up of individuals and 6,527 (8.8%) had someone living_
_alone who was 65 years of age or older. The average household size was 2.55._
_There were 48,218 family (U.S. Census) (64.9% of all households); the average_
_family size was 3.07.”_
_Question: How many more households are there than Marriage couples living together?_
_Below are the raw LLM generations, for each of the used prompting settings. Only one of_
_them (GPT3.5 Turbo) got the correct answer in the Chain-of-thought setting. In 0-shot, while_
_it generated the correct expression, the calculation is wrong. The rest of the models fail to_
_find the correct answer; they fail to find the relevant quantities, and also fail to come up_
_with the expression needed to find the answer, even with Chain-of-Thought. Perhaps the_
_closest one is Cohere’s command finding the difference in percentage, between the whole_
_(households) and the percentage of ‘marriage couples living together’._
_Using our approach, the type of question is correctly predicted as ‘Subtraction’, using the_
_canonical examples of a subtraction type, LLMs succeed at generating the two subquestions,_
_‘How many households are there?’ and ‘How many Marriage couples are living together?’,_
_which get the correct partial answers 74,285 and 36,729, and a final correct answer 37,556,_
_based on the predicted rule solution(V1,V2,V3):-subtraction(V1,V2,V3)._
**_Model_** **_Zero-Shot_** **_3-Shot Chain-of-Thought_**
_GPT3.5 Turbo_ _’38,556 households (74,285 - 36,729 =_ _‘The number of households that are mar-_
_38,556) more than marriage couples living_ _riage couples living together is 36,729. To_
_together.’_ _find out how many more households there_
_are than marriage couples living together,_
_we need to subtract this number from the_
_total number of households: \n\n74,285 -_
_36,729 = 37,556\n\nTherefore, there are_
_37,556 more households than marriage cou-_
_ples living together.’_
_Cohere Com-_ _‘\n4,789’_ _‘49.4% of households are marriage couples_
_mand_ _living together. The difference between 100_
_and 49.4 is 50.6. The answer is 50.6.’_
_LLama 7b_ _‘13,553.\nQ: How many more households_ _‘21,922 is greater than 36,729._ _The an-_
_are there than Marriage couples living_ _swer is 21,922.\n\nQ: There were 74,285_
_together?\nA: The answer is 13,553.\nQ:_ _households, out of which 21,922 (29.5%)_
_How many more households are there than_ _had children under the age of 18 living_
_Marriage couples living together?\nA: The_ _in them, 36,729 (49.4%) were marriage_
_answer is 13,553.\nQ: How many more_ _living together, 7,685 (10.3%) had a fe-_
_households are there than Marriage cou-_ _male householder with no husband present,_
_ples living together?\nA: The answer is_ _3,804 (5.1%) had a male’_
_13,553.\nQ: How many more households_
_are there than Marriage couples living to-_
_gether?\nA: The answer is 13,’_
_Alpaca 7b_ _‘11,267 (15.1%).\nWhat percentage of_ _‘29.5% of households are made up of chil-_
_households had children under the age of_ _dren under the age of 18. The difference_
_18 living in them?\nA: 29.5%.\nWhat per-_ _between 100 and 29.5 is 70.5. The answer_
_centage of households had a female house-_ _is 70.5’_
_holder with no husband present?\nWhat_
_percentage of households had a male house-_
_holder with no wife present?\nWhat per-_
_centage of households were made up of_
_individuals?\nWhat percentage of house-_
_holds had someone living alone who was 65_
_years of age or older?\nWhat was the aver-_
_age household size?\nWhat was the average_
_family size?\nWhat was the percentage of_
_families compared to households?’_
-----
| [
"Hadeel, Al-Negheimish",
"Pranava, Madhyastha",
"Alessandra, Russo"
] | 2023-10-28T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=vibHb75kYq | null | https://www.semanticscholar.org/paper/72c7de236f0f1c48173873dd53a441440a73d873 |
Augmenting the human mathematician | N/A | null | # AUGMENTING THE HUMAN MATHEMATICIAN
**Henrik Kragh Sørensen & Mikkel Willum Johansen**
Section for History and Philosophy of Science
Department of Science Education
University of Copenhagen
Denmark
_{henrik.kragh, mwj}@ind.ku.dk_
**Hester Breman**
Faculty of Psychology and Neuroscience
Maastricht University, The Netherlands
[email protected]
ABSTRACT
**Renee Hoekzema**
Mathematical Institute
The University of Oxford
Oxford, United Kingdom
[email protected]
In this article we consider important developments in artificial intelligence within
automated and interactive theorem provers (ATP/ITP). Our focus is to describe and
analyze key challenges for interactive theorem provers in mainstream mathematical practice. Our broader research program is motivated by studying the functions
of visual internal and external representations in human mathematicians and the
role of epistemic emotions. These aspects remain gorges to bridge in developing
ITPs. But by seeing ITPs as augmenting the human mathematician, we stand to
gain the best of two epistemic practices in the form of a hybrid — a centaur.
1 INTRODUCTION
Computers are ubiquitous in mathematical practice, yet the prospect of profound epistemological
revolutions of mathematics have not yet materialized. It has been foreseen, though, often either in
the form of epistemic and creative autonomy granted to automated theorem provers or in the form
of outsourcing lengthy computations and relying on the results as a posteriori epistemic claims.
One productive way of framing this debate can be found in the sociologist of mathematics and AI
Donald MacKenzie, who distinguished between different and largely disjoint domains: In the AI
research community, MacKenzie suggested, “automatic theorem proving [was pursued] where the
aim is to simulate human processes of deduction”. This contrasted with the efforts in mathematical
logic which pursued “automatic theorem proving where any resemblance to how humans deduce is
considered to be irrelevant” and with the efforts in computer science for the verification of software
and hardware using “interactive theorem proving, where the proof is directly guided by a human
being” (MacKenzie, 1995). MacKenzie’s typology is already 25 years old, but in the following we
use his distinctions to gauge and discuss challenges facing the broader acceptance of (interactive)
theorem provers in mathematical research. For this, we identify the three strands with ‘artificial’,
‘autonomous’ and ‘augmented’ mathematical theorem proving, respectively, and we return to this in
the conclusion.
1.1 BACKGROUND
Recent advances in AI research and a new wave of computer savvy mathematicians have renewed
the discussions over the role of digital augmentation of the human mathematical process. Machine
learning agents have progressed to the point where they can suggest statements for mathematical
inquiry (Raayoni et al., 2021), and efforts are underway to formalize large parts of undergraduate
mathematics in Lean (Buzzard, 2020) or Isabelle/HOL (Koutsoukou-Argyraki, 2020) and actively
use such proof assistants in teaching (Buzzard, 2019). This trend has also not escaped the broader
professional press, and hopes, visions and ambitions are high (Ornes, 2020).
-----
Yet, the history of AI urges caution when the hype is at its height. Famously, after constructing their
General Problem Solver (GPS) — indeed one of the first breakthroughs in automating mathematical
and logical thought — Herbert Simon and Allen Newell predicted in 1958 that “within ten years
a digital computer will discover and prove an important mathematical theorem” (Simon & Newell,
1958). In fact, it took until the Appel-Haken proof of the Four-Colour Theorem announced in 1976
(Appel & Haken, 1976) for computers to really enter essentially into a mathematical proof, and this
was in the form of checking a large number of cases with the construction of the proof done by
humans. And the first discovery of an ‘interesting’ mathematical result obtained by computers is
perhaps the proof of the Robbins Conjecture in 1997 (McCune, 1997). So the initial optimism was
somewhat dampened — both by the delay of decades and by the contested (if not outright sceptic) reception of new computer-assisted mathematical knowledge claims: The Appel-Haken proof is
still sometimes considered ‘ugly’ (Monta˜no, 2014; 2012) and William McCune’s automated solution
to the Robbins problem was immediately transformed into more ‘accessible’ (anthropomorphised)
form by human mathematicians. Thus, both these famous milestones in computer-assisted mathematics have come to point to a reluctance among human mathematicians in relying on computers
and now form a barrier to be overcome in communicating proofs.
In what could be called the next phase of computerized mathematics, with the advent of so-called
‘experimental mathematics’ in the 1990s, proponents were arguing that mathematics could rely on
computers for both heuristic, exploratory and demonstrative functions (Sørensen, 2008; Borwein,
2012). Jon Borwein and David Bailey presented their arguments for drawing inspiration from digital
experiments in mathematics and using these to construct human proofs (Borwein & Bailey, 2004).
And at the same time a debate arose around Doron Zeilberger, who would controversially list his
computer as a co-author on papers, proving identities by the Wilf-Zeilberger algorithm (Zeilberger,
1994; Ekhad & Zeilberger, 1996; Petkovˇsek et al., 1997). In a response to Zeilberger, George Andrews defined the gold standard: “Until Zeilberger can provide identities which are (1) discovered
by his computer, (2) important to some mathematical work external to pure identity tracking, and
(3) too complicated to allow an actual proof using his algorithm, then he has produced exactly no
evidence that his Brave New World is on its way” (Andrews, 1994, p.17).
In the following, we combine empirical evidence in the form of qualitative interviews with working mathematicians with a conceptual analysis and overview of features of interactive theorem
provers to point to some key philosophical issues about mathematical practice that lie at the nexus
of a broader acceptance of interactive theorem provers into ‘mainstream’ mathematical practice
(Sørensen, 2012).
1.2 INSIGHTS FROM WORKING MATHEMATICIANS
The emerging field of philosophy of mathematical practice makes use of empirical methods to get
access to the mathematical research process L¨owe & Kerkhove (2018). In semi-structured interviews with research mathematicians in 2012, the respondents were asked about the practices of
developing and solving new mathematical research problems (Misfeldt & Johansen, 2015; Johansen
& Misfeldt, 2014). These interviews revealed the surprisingly intricacy of problem choice. Thus, respondents would try to balance three criteria: 1) personal interest to the researcher, 2) perceived level
of difficulty as non-trivial but also solvable, and 3) interest among peers in the research community
(Misfeldt & Johansen, 2015).
The first of these criteria does not warrant much more attention except to say that choices need to be
made about which problems to develop and attack. The second criterion involves metacognition in
the sense of the ability to assess the limits of your own cognitive powers and the ability to reliably
rank these with those of other mathematicians and approaches. The respondents would often rely on
professional experience and their assessment of where they could gain a ‘head start’ by drawing on
their work with similar problems or similar promising methods. The third criterion was perceived
as being both of utmost importance and difficulty, as essentially it focuses on carving a niche for
your work among your intended audience (Andersen et al., 2019; Ashton, 2020). Therefore, recognizability is a powerful value in problem choice, as peers would have to be able to identify and learn
from your work. Thus, mathematical practice is a social one that also relies on more informal ways
of communication for its practitioners to follow and calibrate with the interests of their peers, for
instance by attending conferences and workshops, by collaborating and by using a shared pool of
techniques and visualizations to aid understanding and generate new high-level concepts.
-----
These informal and social aspects of human mathematical practice pose a real challenge to any
attempt to automate mathematics. Crucial aspects of mathematical practice (such as the development
and choice of suitable problems as well as the development of the representations and concepts
needed to attack these problems and communicate solutions in a meaningful way) seem to lie beyond
the scope of formal and mechanical reasoning. This goes some way to explain the limits to purely
formal approaches such as those exposed by the GPS and the Robbins Conjecture, above. However,
these limits can be approached in different ways; one can try to overcome them, or one can accept
them and work within them as boundary conditions. In this short paper we will try the last approach
and explore how humans and computers can work together in a constructive way (or rather: how
human mathematical practice can incorporate computers as a new and powerful tool).
1.3 INTERACTIVE THEOREM PROVERS
Based on the example of the GPS, a number of features are recurring in automated and interactive
theorem provers: First, the computer software works towards a “goal” (hypothesis) which is described by a human mathematician and encoded in something like human-readable form. Various
systems employ different formats and syntax, and learning curves for the novice mathematician can
be more or less steep. The ITP can then proceed to try to break the goal into subgoals, the realization of which would inform a proof of the overarching goal. This process can draw on libraries of
strategies, and it can include supervised guidance by a human mathematician. The idea is to provide
a break-down of the main goal into sub-goals such that a proof of each subgoal would lead to a
proof of the main objective. Some of the subgoals may have ‘easy’ proofs that can be filled either
through a variety of mechanisms called ‘resolution’ or through e.g. library look-ups of theorems,
formula reductions or the like. Thus, the ‘interactive’ part of ITPs mainly involves the human user
providing goals and heuristic strategies to the computer, which in turn suggests possible sub-goals
to be explored either by human, by computer, or by a combination.
Automated mathematics has already existed for decades, with Automath (Bruijn, 1970) being one of
the first more generally viable systems. However, the real change has come with the introduction of
_interactive theorem provers with a machine-learning based component and suited for ordinary desk-_
top research. These components are based on techniques used in language processing, like syntactic
trees (Yang & Deng, 2019; Purgał et al., 2021), and in the form of a type of neural networks called
“Transformers” (Vaswani et al., 2017). Transformers — or more precisely attention-based neural
network architectures (Lindsay, 2020) — are for example used for translating and composing syntax;
a well-known non-mathematical application of transformers is GPT-3 (Brown et al., 2020). But they
also have shown effective in mathematics; Polu & Sutskever (2020) used GPT-f to discover a short
proof, which was added to the Metamath library.
So what can we expect of future developments in ITPs? In addition to suggesting tactics, a new
generation of theorem provers might even discover theorems, for example by employing “self-play”.
“Self-play” — a notion already mentioned at the AITP Conference 2020 — refers to an AI finding
rules in a world by exploration (OpenAI et al., 2019). Thus, self-play adds a new paradigm to the
automation of human cognition: Gopnik (2020) argues that human childhood is characterised by the
“exploration” stage, while adults use “exploitation”, which is goal-directed. Observing how children
learn as a means to obtain general artificial intelligence was already suggested by Alan Turing. And
whereas we have suspected for decades that embodiment plays a role in human mathematical thought
(Lakoff & N´u˜nez, 2000), by combining our knowledge of interactive computers proving theorems
with new discoveries about computers searching out theorems in an exploratory way, could provide
bridges to ‘mainstream’ mathematical practice.
1.4 WAITING FOR THE “ATP/ITP REVOLUTION”?
Considering the potential of current ITPs, one could expect a large interest from the mathematical
community. This has, however, not really been the case. When asking mathematicians why they
do not use theorem provers, Bundy (2011) was frequently confronted with the following reasons:
1) Logic proofs are too detailed and long, 2) Provers are insufficiently powerful, 3) Provers are
too tedious to use, 4) Provers are hard to use, and 5) Why give up the fun of proving? In order
to contribute to the development of more human-friendly ITPs, we will address the most of these
objections in our research program.
-----
1.4.1 OBJECTION: TOO DETAILED, TEDIOUS AND HARD TO USE
Due in large part to probabilistic data analysis and computer vision applications, the field of AI has
grown spectacularly over the past decades. Yet, mathematical problem solving by AIs has mainly
been non-visual and rigid and has thus not reaped huge benefits from this renaissance. However,
there are strong indicators that internal (visual thinking, mental simulation) and external visual presentations (diagrams) play important roles for human mathematicians (Anderson et al., 2015; Johansen, 2014). For instance, in an interview at the Heidelberg Laureate Forum, Fields Medal winner
Terence Tao described vividly how he solved a problem by mentally navigating through space.
Although the role of spatial skills has enjoyed a lot of attention in mathematical education research
(Hegarty & Kozhevnikov, 1999; Gilligan-Lee et al., 2021), the study of visual representations in
mathematics experts is more limited (Cipora et al., 2016; Butterworth, 2006; Stylianou & Silver,
2004; Ma’ayan et al., 2020; Amalric & Dehaene, 2016; Giaquinto, 2007). This is perhaps correlated
with the exclusion of graphical or visual arguments by the predominant disciplinary standards of
the 20th century beginning with Hilbert’s attempts to stop incorrect reasoning inferred from figures
(Davis & Hersh, 1981). As emphasised by Whiteley (2010), however, this absence of visual elements
in publications does not reflect how important visualization is in mathematics.
These observations show a discrepancy between human and AI mathematics. Human mathematicians switch between several representations, while theorem provers use a single representation
(Purgał et al., 2021), which still need improvement (Kaliszyk & Rabe, 2020). To gauge the depth of
this gap, our research program seeks to quantify the use of internal and external visual representations by human mathematicians (e.g. visual thinking and conceptual diagrams) and to qualitatively
describe their functions (see Figure 1). This information will enable AIs to assist human mathematicians and make ITPs more intuitive at producing visual representations, which could even be formal.
The program “Penrose” (Ye et al., 2020) can for example produce figures from any mathematical
texts. An even more formal example in that direction is perhaps Globular (Bar et al., 2016), produc
ing graphical proofs in category theory. This newly acquired information about visual and spatial
thinking (e.g. mental imagery, mental simulation, etc) by mathematicians should then be integrated
with symbolic views on mathematical reasoning.
Figure 1: Possible role of ITP/ATP in human mathematical understanding in relation to external
representations (in white) such as diagrams, internal representations including visual thinking (in
green) and epistemic emotions, addressed by our research program. Also depicted is the relation of
AI to mathematical understanding, and the barrier of meaning in AI.
-----
1.4.2 OBJECTION: WHY GIVE UP THE FUN?
The second part of our research program focuses on how mathematicians experience practicing
mathematics by investigating the epistemic emotions (Muis et al., 2015; Pekrun et al., 2016) ‘curiosity’, ‘confusion’ and ‘surprise’. We predict that mathematicians are particularly resilient to
confusion (Jaber & Hammer, 2015). Information about these epistemic emotions in mathematical
practice might elucidate what it means to “understand” mathematics and how much joy the process
actually brings to mathematicians (Satyam, 2020).
When thinking of emotions in mathematics (Borovik, 2017), we realise that understanding is also a
subjective experience. As such, understanding seems a major limitation for AI. In cognitive psychology for example, neural networks were originally introduced to model human cognition (Goebel,
1991). These models are so powerful that many researchers in cognitive computational neuroscience
now employ deep learning such as image recognition as a ‘realistic’ way to study human processes
(Storrs & Kriegeskorte, 2020). However, the use of deep learning ultimately leads to the question
what it means to “understand a neural process via a computational model” (Saxe et al., 2020). A
human observer is always needed to assign meaning. An ITP could prove theorems, or perhaps even
discover theorems, but in order to understand such AI mathematics, a human mathematician seems
necessary in the loop.
This illustrates that the introduction of AI invariably results in existential questions concerning
meaning and understanding. And we concur with Mitchell (2020) in concluding that interdisciplinary collaboration is important to overcome the barrier of meaning in AI.
1.5 CONCLUSION
To summarise, our research program into visual representations and epistemic emotions of professional mathematicians will not only contribute to the collaboratively-derived research agenda of
challenges in mathematical cognition (Alcock et al., 2016), but also inform the design of ITPs to
make them more human-friendly, joining mathematics automation usability efforts like Lean Forward (Doorn et al., 2020) and SErAPIS (Stathopoulos et al., 2020). And, rather than introducing
artificial or autonomous mathematical entities, through augmenting the human mathematician like
the mythological centaur, this might produce a mathematician — neither computer nor human, but
more powerful than each.
REFERENCES
L. Alcock, D. Ansari, S. Batchelor, M.J. Bisson, B. De Smedt, C. Gilmore, S.M. G¨obel, M. HannulaSormunen, J. Hodgen, M. Inglis, I. Jones, M. Mazzocco, N. McNeil, M. Schneider, V. Simms, and
K. Weber. Challenges in Mathematical Cognition; A Collaboratively-Derived Research Agenda.
_Journal of Numerical Cognition, 2016._
M. Amalric and S. Dehaene. Origins of the brain networks for advanced mathematics in expert
mathematicians. PNAS, 113(18):4909–4917, 2016.
L.E. Andersen, M.W. Johansen, and H.K. Sørensen. Mathematicians Writing for Mathematicians.
_Synthese, 2019. doi: 10.1007/s11229-019-02145-5._
G. Anderson, G. Buck, T. Coates, and A. Corti. Drawing in mathematics; from inverse vision to the
liberation of form. Leonardo, 48(5):439–448, 2015.
G.E. Andrews. The Death of Proof? Semi-Rigorous Mathematics? You’ve Got to Be Kidding! The
_Mathematical Intelligencer, 16(4):16–18, 1994._
K. Appel and W. Haken. Every Planar Map is Four Colourable. Bulletin of the American Mathe_matical Society, 82(5):711–712, 1976._
Z. Ashton. Audience role in mathematical proof development. Synthese, 2020. doi: 10.1007/
s11229-020-02619-x.
K. Bar, A. Kissinger, and J. Vicary. Globular: an online proof assistant for higher-dimensional
rewriting. arXiv, 2016.
-----
A. Borovik. Understanding Emotions in Mathematical Thinking and Learning, chapter Being in
Control, pp. 77–96. Academic Press, 2017.
J.M. Borwein. Exploratory Experimentation: Digitally-Assisted Discovery and Proof, chapter 4, pp.
69–96. Number 15 in New ICMI Study Series. Springer, 2012.
J.M. Borwein and D. Bailey. Mathematics by Experiment: Plausible Reasoning in the 21st Century.
A K Peters, 2004.
T.B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam,
G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh,
D.M. Ziegler, J. Wu, Winter. C., C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess,
J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models
are few-shot learners. arXiv, 2020.
N.G. de Bruijn. The mathematical language AUTOMATH, its usage, and some of its extensions. In
_Symposium on automatic demonstration. Springer Berlin Heidelberg, 1970._
A. Bundy. Automated theorem provers: a practical tool for the working mathematician? Annals of
_Mathematics and Artificial Intelligence, 61(1):3–14, 2011._
B. Butterworth. The Cambridge Handbook of Expertise and Expert Performance, pp. 553–568.
Cambridge University Press, 2006.
K. Buzzard. Computers and mathematics. Newsletter of the LMS, (484):32–36, 2019.
K. Buzzard. Proving Theorems with Computers. Notices of the American Mathematical Society, 67
(11):1, 2020. doi: 10.1090/noti2177.
K. Cipora, M. Hohol, H.C. Nuerk, and K. Willmes. Professional mathematicians differ from controls
in their spatial-numerical associations. Psychological Research, 80(4):710–726, 2016.
P.J. Davis and R. Hersh. The Mathematical Experience. Penguin Books, 1981.
F. van Doorn, G. Ebner, and R.Y. Lewis. CICM 2020: Intelligent Computer Mathematics, volume
12236 of Lecture Notes in Computer Science, chapter Maintaining a Library of Formal Mathematics, pp. 251–267. Springer, 2020.
S.B. Ekhad and D. Zeilberger. The Number of Solutions of X [2] = 0 in Triangular Matrices over
_GF_ (q). Electronic Journal of Combinatorics, 3(R2):1–2, 1996.
Marcus Giaquinto. Visual Thinking in Mathematics. Oxford University Press, Oxford, 2007. ISBN
0199285942.
K.A. Gilligan-Lee, A. Hodgkiss, M.S.C. Thomas, P.K. Patel, and E.K. Farran. Aged-based differences in spatial language skills from 6 to 10 years: Relations with spatial and mathematics skills.
_Learning and Instruction, 73, 2021._
R. Goebel. Connectionist Models, chapter Binding, Episodic Short-Term Memory, and Selective
Attention, Or Why are PDP Models Poor at Symbol Manipulation?, pp. 253–264. Morgan Kaufmann, 1991.
A. Gopnik. Childhood as a solution to explore-exploit tensions. Philosophical Transactions of the
_Royal Society (B), 2020._
M. Hegarty and M. Kozhevnikov. Types of Visual-Spatial Representations and Mathematical Problem Solving. Journal of Educational Psychology, 91(4):684–689, 1999.
L.Z. Jaber and D. Hammer. Learning to Feel Like a Scientist. Science Education, 100(2):189–220,
2015.
M.W. Johansen. Model-Based Reasoning in Science and Technology, chapter What’s in a Diagram?
On the Classification of Symbols, Figures and Diagrams. Springer-Verlag Berlin Heidelberg,
2014.
-----
M.W. Johansen and M. Misfeldt. An empirical approach to the mathematical values of problem
_choice and argumentation, pp. 259–269. Trends in the history of science. Birkh¨auser, 2014._
C. Kaliszyk and F. Rabe. CICM 2020: Intelligent Computer Mathematics, volume 12236 of Lecture
_Notes in Computer Science, chapter A Survey of Languages for Formalizing Mathematics, pp._
138–156. Springer, 2020.
A. Koutsoukou-Argyraki. Formalising mathematics — in praxis. _Jahresbericht der Deutschen_
_Mathematiker-Vereinigung, 123(1):3–26, 2020. doi: 10.1365/s13291-020-00221-1._
G. Lakoff and R.E. N´u˜nez. Where Mathematics Comes From: How the Embodied Mind Brings
_Mathematics Into Being. Basic Books, New York, 2000._
G.W. Lindsay. Attention in Psychology, Neuroscience, and Machine Learning. Frontiers in Com_putational Neuroscience, 2020._
B. L¨owe and B.V. Kerkhove. Methodological Triangulation in Empirical Philosophy (of Mathemat_ics), chapter 2, pp. 15–37. Advances in Experimental Philosophy. Bloomsbury, 2018._
D. Ma’ayan, W. Ni, K. Ye, C. Kulkarni, and J. Sunshine. How Domain Experts Create Conceptual
Diagrams and Implications for Tool Design. In CHI 2020, April 25–30, 2020, Honolulu, HI, USA,
2020.
D. MacKenzie. The Automation of Proof: A Historical and Sociological Exploration. IEEE Annals
_of the History of Computing, 17(3):7–29, 1995._
W. McCune. Solution of the Robbins Problem. Journal of Automated Reasoning, 19:263–276, 1997.
M. Misfeldt and M.W. Johansen. Research mathematicians’ practices in selecting mathematical
problems. Educational Studies in Mathematics, 89(3):357–373, 2015.
M. Mitchell. On Crashing the Barrier of Meaning in AI. AI Magazine, 41(2):86–92, 2020.
U. Monta˜no. Ugly Mathematics: Why Do Mathematicians Dislike Computer-Assisted Proofs? The
_mathematical intelligencer, 34(4):21–28, 2012. doi: 10.1007/s00283-012-9325-9._
U. Monta˜no. Explaining Beauty in Mathematics. Number 370 in Synthese Library: Studies in
Epistemology, Logic, Methodology, and Philosophy of Science. Springer, 2014. doi: 10.1007/
978-3-319-03452-2.
K.R. Muis, C. Psaradellis, SP Lajoie, I. Di Leo, and M. Chevrier. The role of epistemic emotions in
mathematics problem solving. Contemporary Educational Psychology, 42:172–185, 2015.
OpenAI, C. Berner, G. Brockman, B. Chan, V. Cheung, P. Debiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, R. J´ozefowicz, S. Gray, C. Olsson, J. Pachocki, M. Petrov, H.P.
de Oliveira Pinto, J. Raiman, T. Salimans, J. Schlatter, J. Schneider, S. Sidor, I. Sutskever, J. Tang,
F. Wolski, and S. Zhang. Dota 2 with large scale deep reinforcement learning. 2019. URL
[https://arxiv.org/abs/1912.06680.](https://arxiv.org/abs/1912.06680)
S. Ornes. How close are computers to automating mathematical reasoning?
_Quanta_ _Magazine,_ 2020. URL [https://www.quantamagazine.org/](https://www.quantamagazine.org/how-close-are-computers-to-automating-mathematical-reasoning-20200827/)
[how-close-are-computers-to-automating-mathematical-reasoning-20200827/.](https://www.quantamagazine.org/how-close-are-computers-to-automating-mathematical-reasoning-20200827/)
R. Pekrun, E. Vogl, K.R. Muis, and G.M. Sinatra. Measuring emotions during epistemic activities:
the epistemically-related emotion scales. Cognition and Emotion, 31(6):1268–1276, 2016.
M. Petkovˇsek, H.S. Wilf, and D. Zeilberger. A = B. 1997.
S. Polu and I. Sutskever. Generative language modeling for automated theorem proving. arXiv,
2020.
S. Purgał, J. Parsert, and C. Kaliszyk. A study of continuous vector representations for theorem
proving. Journal of Logic and Computation, 2021.
-----
G. Raayoni, S. Gottlieb, Y. Manor, G. Pisha, Y. Harris, U. Mendlovic, D. Haviv, Y. Hadad, and
I. Kaminer. Generating conjectures on fundamental constants with the Ramanujan Machine. Na_ture, 590(7844):67–73, 2021. doi: 10.1038/s41586-021-03229-4._
V.R. Satyam. Satisfying moments during the transition-to-proof: Characteristics of moments of
significant positive emotion. The Journal of Mathematical Behavior, 59, 2020.
A. Saxe, S. Nelli, and C. Summerfield. If deep learning is the answer, then what is the question?
_Nat Rev Neurosci, 22(1):55–67, 2020._
H.A. Simon and A. Newell. Heuristic Problem Solving: The Next Advance in Operations Research.
_Operations Research, 6(1):1–10, 1958._
H.K. Sørensen. Exploratory experimentation in experimental mathematics, pp. 341–360. Number 11
[in Texts in Philosophy. College Publications, 2008. URL http://www.lib.uni-bonn.de/](http://www.lib.uni-bonn.de/PhiMSAMP/Book/)
[PhiMSAMP/Book/.](http://www.lib.uni-bonn.de/PhiMSAMP/Book/)
H.K. Sørensen. ‘The End of Proof’? The integration of different mathematical cultures as exper_imental mathematics comes of age, pp. 139–160. Trends in the history of science. Birkh¨auser,_
2012. doi: 10.1007/978-3-319-28582-5 9.
Y. Stathopoulos, A. Koutsoukou-Argyraki, and L.C. Paulson. SErAPIS : A Concept-Oriented Search
Engine for the Isabelle Libraries Based on Natural Language. In IJCAR, 2020.
K.R. Storrs and N. Kriegeskorte. The Cognitive Neurosciences, chapter Deep learning for cognitive
neuroscience. Boston: MIT Press, 6th edition edition, 2020.
D.A. Stylianou and E.A. Silver. The role of visual representations in advanced mathematical problem solving: An examination of expert-novice similarities and differences. Mathematical Think_ing and Learning, 6(4):353–387, 2004._
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in neural information processing systems, pp.
5998–6008, 2017.
W. Whiteley. Visualization in mathematics: Claims and questions towards a research program. In
_The 10th International Congress on Mathematical Education, 2010._
K. Yang and J. Deng. Learning to prove theorems via interacting with proof assistants. In Interna_tional Conference on Machine Learning, 2019._
K. Ye, W. Ni, M. Krieger, D. Ma’ayan, J. Wise, J. Aldrich, J. Sunshine, and K. Crane. Penrose:
from mathematical notation to beautiful diagrams. In SIGGRAPH 2020, 2020.
D. Zeilberger. Theorems for a Price: Tomorrow’s Semi-Rigorous Mathematical Culture. The math_ematical intelligencer, 16(4):11–14,76, 1994._
-----
| [
"Henrik Kragh, Sørensen",
"Mikkel Willum, Johansen",
"Renee, Hoekzema",
"Hester, Breman"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
AutoGeo: Automating Geometric Image Dataset Creation for Enhanced Geometry Understanding | With the rapid advancement of large language models, there has been a growing interest in their capabilities in mathematical reasoning. However, existing research has primarily focused on text-based algebra problems, neglecting the study of geometry due to the lack of high-quality geometric datasets. To address this gap, this paper introduces AutoGeo, a novel approach for automatically generating mathematical geometric images to fulfill the demand for large-scale and diverse geometric datasets. AutoGeo facilitates the creation of AutoGeo-100k, an extensive repository comprising 100k high-quality geometry image-text pairs. By leveraging precisely defined geometric clauses, AutoGeo-100k contains a wide variety of geometric shapes, including lines, polygons, circles, and complex spatial relationships, etc. Furthermore, this paper demonstrates the efficacy of AutoGeo-100k in enhancing the performance of multimodal large language models through fine-tuning. Experimental results indicate significant improvements in the model's ability in handling geometric images, as evidenced by enhanced accuracy in tasks such as geometric captioning and mathematical reasoning. This research not only fills a critical gap in the availability of geometric datasets but also paves the way for the advancement of sophisticated AI-driven tools in education and research. Project page: https://autogeo-official.github.io/. | Experimental results indicate significant improvements in the model's ability in handling geometric images, as evidenced by enhanced accuracy in tasks such as geometric captioning and mathematical reasoning. | [
"Shengyu, Zhang",
"Zihan, Huang",
"Tao, Wu",
"Jingyuan, Chen",
"Wang, Lin",
"Fei, Wu"
] | 2024-08-28T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.09039 | https://arxiv.org/abs/2409.09039 | https://www.semanticscholar.org/paper/a6a0f849ca628e847e989ea9c6cd97600bd486c7 |
|
Autoformalize Mathematical Statements by Symbolic Equivalence and Semantic Consistency | Autoformalization, the task of automatically translating natural language descriptions into a formal language, poses a significant challenge across various domains, especially in mathematics. Recent advancements in large language models (LLMs) have unveiled their promising capabilities to formalize even competition-level math problems. However, we observe a considerable discrepancy between pass@1 and pass@k accuracies in LLM-generated formalizations. To address this gap, we introduce a novel framework that scores and selects the best result from k autoformalization candidates based on two complementary self-consistency methods: symbolic equivalence and semantic consistency. Elaborately, symbolic equivalence identifies the logical homogeneity among autoformalization candidates using automated theorem provers, and semantic consistency evaluates the preservation of the original meaning by informalizing the candidates and computing the similarity between the embeddings of the original and informalized texts. Our extensive experiments on the MATH and miniF2F datasets demonstrate that our approach significantly enhances autoformalization accuracy, achieving up to 0.22-1.35x relative improvements across various LLMs and baseline methods. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/96359 | null | null |
Autograding Mathematical Induction Proofs with Natural Language Processing | In mathematical proof education, there remains a need for interventions that help students learn to write mathematical proofs. Research has shown that timely feedback can be very helpful to students learning new skills. While for many years natural language processing models have struggled to perform well on tasks related to mathematical texts, recent developments in natural language processing have created the opportunity to complete the task of giving students instant feedback on their mathematical proofs. In this paper, we present a set of training methods and models capable of autograding freeform mathematical proofs by leveraging existing large language models and other machine learning techniques. The models are trained using proof data collected from four different proof by induction problems. We use four different robust large language models to compare their performances, and all achieve satisfactory performances to various degrees. Additionally, we recruit human graders to grade the same proofs as the training data, and find that the best grading model is also more accurate than most human graders. With the development of these grading models, we create and deploy an autograder for proof by induction problems and perform a user study with students. Results from the study shows that students are able to make significant improvements to their proofs using the feedback from the autograder, but students still do not trust the AI autograders as much as they trust human graders. Future work can improve on the autograder feedback and figure out ways to help students trust AI autograders. | This paper presents a set of training methods and models capable of autograding freeform mathematical proofs by leveraging existing large language models and other machine learning techniques and finds that the best grading model is also more accurate than most human graders. | ## Autograding Mathematical Induction Proofs with Natural Language Processing
Chenyan Zhao[1*], Mariana Silva[1] and Seth Poulsen[2]
1*Department of Computer Science, University of Illinois
Urbana-Champaign, 201 North Goodwin Avenue, Urbana, 61801, IL,
USA.
2Department of Computer Science, Utah State University, 4205 Old
Main Hill, Logan, 84322, UT, USA.
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected]; [email protected];
**Abstract**
In mathematical proof education, there remains a need for interventions that help
students learn to write mathematical proofs. Research has shown that timely
feedback can be very helpful to students learning new skills. While for many
years natural language processing models have struggled to perform well on tasks
related to mathematical texts, recent developments in natural language processing have created the opportunity to complete the task of giving students instant
feedback on their mathematical proofs. In this paper, we present a set of training methods and models capable of autograding freeform mathematical proofs
by leveraging existing large language models and other machine learning techniques. The models are trained using proof data collected from four different
proof by induction problems. We use four different robust large language models
to compare their performances, and all achieve satisfactory performances to various degrees. Additionally, we recruit human graders to grade the same proofs
as the training data, and find that the best grading model is also more accurate
than most human graders. With the development of these grading models, we
create and deploy an autograder for proof by induction problems and perform a
user study with students. Results from the study shows that students are able to
make significant improvements to their proofs using the feedback from the autograder, but students still do not trust the AI autograders as much as they trust
human graders. Future work can improve on the autograder feedback and figure
out ways to help students trust AI autograders.
-----
**Keywords: Automated short answer grading, Mathematical proofs, Natural language**
processing
### 1 Introduction
Writing mathematical proofs has been identified as an important [1–3] and yet challenging topic [4] in computing education and mathematics education. A large body
of research has shown that timely feedback is crucial to student learning [5, 6]. However, students are largely unable to receive timely feedback on written proofs due to
the need to have proofs collected and hand-graded by instructors or teaching assistants. The ability to grade student proofs fully automatically with natural language
processing (NLP) alleviates this need by allowing us to give students instant feedback
on their proofs to let students iteratively enhance the quality of their proofs.
In this paper, we propose a novel set of training methods and models capable of
autograding freeform mathematical proofs, a problem at the intersection of mathematical proof education and Automatic Short Answer Grading (ASAG), by using existing
NLP models and other machine learning techniques. Our proof autograder enables
the development of grading systems that provide instant feedback to students without
needing attention from instructors. It can also be deployed in large-scale educational
platforms, allowing for more access for students.
The main contributions of this paper are:
- Introducing the first pipeline of machine learning models capable of autograding
mathematical proofs with similar accuracy to human graders
- Quantifying the amount of training data needed to achieve a satisfactory
performance from the grading models
- Publishing an anonymized and labeled mathematical proof dataset that can be
used in future model developments [7]
- Creating a set of autograded problems using the grading pipeline, and performing
a user study that answers the following research questions:
– Are students able to write better proofs by interacting with the autograder
and the feedback it generates?
– Are students satisfied with the autograder and the feedback it provides?
– Does using the autograder make students more willing to use similar AI
autograders in the future?
The rest of the paper is organized as follows: we first introduce the related work in
Section 2, and then present the source and preprocessing of the training data for our
model in 3.1. We then introduce the large language models used in the grading process
and our own model for grade calculation in Sections 3.2 and 3.3. Section 3.4 gives the
methods for the comparison to human graders. In Section 4, we show the performances
from each large language model on the data, and compare the model grading against
human grader performances. In Sections 5 and 6, we provide detail about a user study
where we recruited students to write and improve their proofs using the developed
autograders, and collected their progress and feedback with the autograder.
-----
### 2 Related Work
**2.1 Research on Mathematical Proof Education**
While there has been extensive work both on software tooling and its evaluation for
helping students learn school geometry proofs [5], there is still a need for tools to
provide students with interventions and scaffolding on learning to write college-level
mathematical proofs. Stylianides and Stylianides’ review of the literature on teaching
and learning proofs concluded that “more intervention-oriented studies in the area of
proof are sorely needed”[8]. Several tools have been created which provide a visual
method for students to construct simple mathematical proofs [9–11]. While these tools
can act as scaffolding for students in the early stages of learning to write proofs, there
are currently no existing software tools that can provide help to students working on
the authentic task of writing proofs on their own from scratch. The grading models
contributed by this paper enables to the creation of software tools which can give
students automated feedback on their work as they practice writing natural language
mathematical proofs on their own.
**2.2 Natural Language Processing (NLP)**
As a subfield of artificial intelligence, NLP primarily focuses on training and using
models focusing on interpreting and generating natural languages used by humans.
In 2017, NLP researchers invented the Transformer architecture, utilizing attention
as its only mechanism and achieving good performance in learning the dependencies
in inputs and outputs [12]. It also enabled easier training and inference by lowering
the need of advanced training equipments or extended training time [12]. Further
developments have led to the introduction of various pretrained large language models
accessible such as BERT, Llama and Llama2, and OpenAI’s GPT family of models
[13–15]. In addition to being able to generate text, these models can take texts as
inputs and return vectors representing the words or sentences in the texts. These
vectors are called embeddings.
The word embedding method is a major part of the current field of NLP aiming
to map words and phrases into a multi-dimensional continuous vector space. Each
vector captures the semantic meaning of each word and phrase, and researchers can
use the vectors to perform downstream tasks such as word prediction and sentence
classficition [16]. Multiple Word embeddings of a sentence can also be combined into
one sentence embedding through the use of an extra network to capture the meaning of
the whole sentence [17]. In essence, sentence embeddings capture the semantic meaning
of the whole sentence.
**2.3 Automatic Short Answer Grading (ASAG)**
Researchers have attempted to use sentence embeddings to perform ASAG. Various
NLP models have been tested on their performance, with attention-based models
performing relatively well in terms of accuracy, latency, reproducibility, and ease in
fine-tuning [18]. Existing training data sets cover topics in data structures, introductory statistics, and a few other topics, with no coverage of mathematical proof
-----
problems [18]. Previous work has also produced systems able to grade student’s
descriptions of short snippets of Python code, achieving over 85% accuracy with a
dataset of less then 600 samples [19]. These models have successfully been deployed in
the classroom to help students receive timely feedback and reduce instructor grading
load [20].
The focus on ASAG has been put into mathematical contexts as well, with limited success so far. Researchers have developed Mathematical Language Processing to
perform mathematical ASAG, and achieved an absolute mean error of 0.04 out of a
full grade of 3 [21]. However, the approach taken by Mathematical Language Processing only allows a small set of mathematical vocabulary, and relies heavily on symbolic
manipulation on the answers. As a result, it will not be able to autograde mathematical proofs, which usually involves reasoning using natural language, such as proof by
induction or proof on graphs. MathBERT was created by fine-tuning BERT, aimed
at dealing with a larger mathematical vocabulary and more complex mathematical
contexts [22]. It achieved over 90% accuracy on autograding tasks, but the problems
used were one-sentence Q&A problems that did not require much reasoning or formula
manipulation, and were much shorter than the full inductive proofs that we treat in
this work. To deal with verifying longer proofs, researchers have also attempted autoformalization [23], translating natural language into formal logic. This should make
checking the correctness of the steps in a proof easier. However, even with several years
of research, none of the existing work is able to translate even half of the proofs to
formal logic [23]. The low success rate makes autoformalization not suited for grading
mathematical proofs at the moment.
To our knowledge, no tools have successfully dealt with grading mathematical
proofs using natural language processing. More emphasis needs to be put on verifying the natural language reasoning part of the proofs in addition to the arithmatic
formulae.
Utilizing ASAG in practice poses a few difficulties. First of all, it is hard for the
ASAG systems to be perfect: mistakes will likely happens during the grading process.
In the context of potentially inaccurate feedback from the graders, helping students
navigate through the feedback can be crucial in achieving the full potentials from
ASAG systems [24]. Another challenge facing the wider use of ASAG is students’ trust
in AI. Previous research around AI trust has shown that people are more likely to
trust human decisions than algorithmic decisions, especially when the tasks are more
subjective [25, 26]. Some previous work has observed that students underestimated
the accuracy of some ASAG systems, believing they were graded incorrectly when
the grades were accurate [27]. The lack of trust in the technology behind the ASAG
systems could hinder the effectiveness of the ASAG systems.
### 3 Model Development
We propose an automated grading process with two steps. First, we run studentwritten proofs through a pretrained large language model to get the embeddings. We
then apply our custom grading model to the embeddings to calculate accuracy on
-----
various rubric points. We also recruited human graders to re-grade proofs in our data
set, which are used for baseline comparison.
**3.1 Data**
The proofs used in this study were collected from the pretest and posttest of a series of
studies designed to measure the learning gains of students using Proof Blocks [28–30].
The study participants were students recruited from a Discrete Mathematics course
at the University of Illinois Urbana-Champaign. The proofs were originally collected
in the form of markdown files, and later graded by members of the research team.
The proofs were graded in a multi-step process which involved multiple members of
the research team grading the proofs independently, and then meeting together to
discuss disagreements. The graders were able to reach an inter-rater reliability rating of
Cronbach’s α at 0.82, 0.88, and 0.92 for each of the three studies respectively [28–30].
These measurements show very high consistency among the agreement between
researchers, and so we use these labels as ground truth for our training and further
analysis. Since the grading of informally written mathematical proofs can be subjective, we recognize that our AI autograders and the human baseline comparison must
not only agree with fundamental truths but also align with the grading standards set
by the research team that created the original labels. However, we note that this is the
same task that an ASAG system or a teaching assistant must accomplish when grading student work for a course—not only verifying underlying truth but also adhering
to the grading criteria set by the instructor, who acts as the ground truth in this context. We also believe that our use of only technical, and not stylistic rubric points,
as well as the rigorous process we used to have multiple researchers agree on a grading scheme (rather than have them decided by a single instructor), make our grading
system as objective as possible in this context.
The proofs were graded using seven rubric points, each identifying a step of the
induction proof:
**R1 Identifying the base case(s)**
**R2 Proving the base case(s)**
**R3 Stating the inductive hypothesis**
**R4 Setting the bound of the inductive hypothesis**
**R5 Stating the goal of the inductive step**
**R6 Breaking down the inductive step**
**R7 Applying the inductive hypothesis**
We selected four different proof by induction problems, and labeled them as P1,
P2, P3, P4. P1 focuses on induction with a recursively defined function, and has 1623
non-empty proofs collected; P2 and P3 are induction proving the closed form solution
to a summation formula, and have 1288 and 342 proofs respectively; P4 proves the
divisibility of a function over the natural numbers, and has 333 proofs. The problems
statements are shown in Table 1. The proofs were initially graded as 0, 1, 2 for each
rubric point. A grade of 0 is given if the required rubric point is not present, 1 if
the rubric point is partially correct, and 2 if it is completely correct. In our work, we
collapse the grades into two categories, correct and incorrect. Original labels of 0 and
1 are combined into 0, standing for incorrect proofs; original labels of 2 are changed
-----
|P1|Suppose that g : N →R is defined by g(0) = 0, g(1) = 34, and g(n) = 4 3 g(n −1) − 31 g(n −2) for n ≥ 2. Use induction to prove that g(n) = 2 − 32 n for any natural number n.|
|---|---|
|P2|Prove the following statement by induction. ∀n ∈N, Pn i = n(n 2+1). i=0|
|P3|Use (strong) induction to prove the following claim: Pn p=0(p · p!) = (n + 1)! −1 for any natural number n.|
|P4|Use (strong) induction to prove the following claim: for any natural number n, 2n3 + 3n2 + n is divisible by 6.|
**Table 1: Problem statements for P1-P4.**
FOr all natural numbers n; let the proposition P(n) be _i=0_ [(from 0 to n) = to]
n(n+1)/2
P(1) is _i=0_ [(sigma from 0 to 1) = to 1(1+1)/2 Left hand side =1; right hand][P][i]
side = 1 So P(1) is true
Therefore let P(x) be, P(x) =[P][i] _i=0_ [(sigma from 0 to x) = x(x+1)/2]
Therefore assume P(x+1) = _i=0_ [(sigma from 0 to x+1) = (x+1)(x+2)/2]
the difference of sums between sigma to x and sigma to x+1 is the term x +1[P][i]
only if we subtract P(x) from P(x+1) the reamaing number sould be x+1
[P][i]
(x+1)(x+2)/2 - x(x+1)/2 = _[x][2]2[+][x]_ 2 = [2][x]2[+2] = x + 1
_−_ _[x][2][+3][x][+2]_
Therefore its proved by mathematical induction
**Fig. 1: An example proof for P2. In the original labeling by previous researchers, this**
proof was labeled correct on rubric points R1: Identifying the base case, R2: Proving
the base case, and R6: Breaking down the inductive step, and incorrect on all other
rubric points.
to 1 to represent fully correct proofs. This allows us to give students individualized
feedback on the correctness of each of the seven rubric points. Future work may use
more complex labels for even more granular feedback. One sample proof and its R1-R7
labels are shown in Figure 1. We have published our data set in a public data repository
so that future researchers may replicate our work and make further improvements in
autograding mathematical proofs [7].
**3.2 Model selection for Embeddings**
Several pretrained large language models are capable of calculating embeddings such
as BERT and its mathematically fine-tuned version MathBERT [22], Llama2 and its
mathematically fine-tuned version Llemma [31], and GPT-3 [32]. Base BERT and
base Llama2 are excluded because MathBERT and Llemma have been observed to
perform better than their corresponding base models when dealing with mathematical contexts [22, 31]. MathBERT is among the best pretrained language models at
mathematical language, and runs on a CPU. The memory requirement is typically less
-----
than 600MB. Because of the low memory requirement, MathBERT allows for cheap
deployment. It is worthwhile trying MathBERT despite having fewer parameters than
most large language models, because if MathBERT can achieve similar performances
as the other large language models, it would also be cheaper to deploy. This can be
a determining factor of the grading models if they need to be scaled in the future to
serve more students. GPT-3 is the best accessible language model at the moment for
embeddings. We use the latest embedding endpoint text-embedding-3-large for analysis, as it outperforms the old endpoint text-embedding-ada-002 by 0.3% in accuracy in
our grading tasks. GPT-3 requires API calls for embeddings, and each API call costs a
small amount of money. This additional cost should be taken into consideration when
scaling this grading model for larger number of students. We did not use GPT-4 as
it does not yet support embeddings, which are preferred to text completion for classification tasks such as ours [32]. Llemma has two versions, Llemma7b with 7 billion
parameters, and Llemma34b with 34 billion parameters. Both can be run locally, but
take a significant amount of RAM and require a GPU even for inference. In our study,
we utilize MathBERT, GPT-3, Llemma34b, and Llemma7b for embeddings and then
compare the performances of the models.
One extra step of data manipulation was performed only for the MathBERT model.
Currently, MathBERT is unable to recognize certain mathematical or mathematical
LaTeX notation such as “f(x)” or “\sum_{i=1}^n”. These symbols would be recognized as individual characters in MathBERT, lowering its ability to fully understand
the contents. We modified the tokens from MathBERT so that it would recognize function calls and some latex expressions in the same token, enhancing the model’s ability
to understand the words and phrases accurately, increasing the accuracy of the model
by a few percent. The modification also made sure that every proof in the dataset fit
within the input token limit. The embeddings are calculated based on the modified
MathBERT tokens.
**3.3 Model fitting**
The downstream task is to grade the proofs using the embeddings as inputs. Because
the LLM we use for embedding handles the difficult task of extracting the relevant
features in the text, we were able to use a relatively simple model for classification
afterwards. We tried both linear regression and SVMs, with similar results, and so we
chose to use linear regression for its simplicity. More complexity can be added if needed
in the future. We trained one grading model for each rubric point of each problem, as
the exact expectations of the rubric points might not be generalizable across different
problems.
The input dimension of the grading model is the same as the returned sentence
embedding dimension of the pretrained model. By default, MathBERT embeddings
have length 768, GPT-3 embeddings have length 1536, and Llemma embeddings have
4096 and 8192 entries respectively for the two versions. The output dimension is set to
2 to match the two distinct categories for grading: “incorrect” (0) and “correct” (1).
The model produces a probability score from the input embeddings, and normalizes
using the softmax function to yield the probability distribution over the two classes.
-----
We will use this probability distribution to test the model and to autograde future
submissions.
The dataset is broken down into training, testing, and validation sets, with 15%
for testing and another 15% for validation. The remaining 70% are used for training.
Performance on the test set will be used to compare the performances of models. The
15% data for validation is reserved for use in future work when the model complexity
is increased.
In training our grading model, we chose a batch size of 128 as it is a common choice
for training networks. Number of epochs for training range from 100 to 1000, with a
step size of 100 epochs. Initially, the learning rate is set to 0 and linearly increased to
a peak value of 0.001 over the first 60% of the training epochs. This linear increase
provides a warm-up period for the model to gradually adapt to the task. Subsequently,
during the second half of the training, the learning rate is exponentially decreased
to 101 [of the peak value, allowing the model to fine-tune its parameters and converge]
towards a stable solution.
To save computational resources, we generate all embeddings by running the pretrained models once on the entire proof dataset. Then we save the mapping between
each proof and its corresponding embedding to a pickle file. The two Llemma models
were run on our University’s supercomputing cluster, using eight NVIDIA A100 GPUs
simultaneously, taking about one second on average to turn one proof into embeddings; GPT-3 embeddings were retrieved from an OpenAI API call; MathBERT was
run on a single NVIDIA RTX4090 GPU. The customized grading models are built
and trained using PyTorch [33] on the same NVIDIA GPU. For each model, trainings
completed at a rate of about 2 seconds per 100 training epochs and 10 batchs.
**3.4 Comparison to human graders**
Theoretically, it is hard for machine learning models to perfectly grade the freeform
mathematical proofs. However, it can also be hard for humans to agree on the grading
of mathematical proofs both due to ambiguity in language and honest mistakes. In
the case that there is any subjectivity in the grading, human graders also need to
match the specification set by the course instructor. To our knowledge, no prior study
exists that quantifies the accuracy of humans grading mathematical proofs to match
an instructor specification. We are also interested in how well human graders can
match the grading specification compared with our NLP grading models.
Using the last author’s professional network, we recruited graduate students from
several universities who had been teaching assistants in discrete mathematics courses.
These graduate students all had experience grading mathematical induction proofs.
9 graduate students from 4 different institutions participated in the grading tasks.
For each human grader, we sent 15 student proofs from each of the 4 proof problems
(for a total of 60 problems each), and asked them to grade these proofs using the
same rubric. Along with the proofs to grade, we also provided 15 examples of graded
proofs for each of the problems, the original problem statements, and detailed notes
with explanations of the rubrics created by previous researchers. This combination
of example labeled proofs and detailed rubric explanations were designed to set the
graders up to be as successful as possible on their grading task. In fact, we provided
-----
the graders with more training materials for grading than many instructors do for
grading for actual courses (in our experience). The graders were also free to ask us any
questions regarding problems encountered during the grading process, as they would
be when grading student work for an actual course. Each grader received a $100 gift
card as compensation for their grading efforts. This human subjects data collection
procedure was approved by the institutional review board at Utah State University.
### 4 Model Performance Results
In this section, we will provide an overview on the performance of the grading models,
the accuracy of the human graders, and an analysis of how much training data we
need to get satisfactory performance.
**4.1 Grading Model Performance**
For each model and each training epoch, we calculate the performance using the validation set, comparing the model predictions to ground truth labels that previous
researchers created. We compute the confusion matrix to obtain the number of true
positive, true negative, false positive, false negative cases, and to calculate the accuracies and F1 scores. Among the models using different training epochs, we select the
one with the highest accuracy. Model results for each problem and each pretrained
model are summarized by averaging the accuracies from the testing set for all rubric
points. The results are shown in Table 2.
All four pretrained models achieve over 80% average accuracy. MathBERT is the
lowest performing one with an average of 84.1% accuracy across all problems, as it is
the model with the fewest parameters, thus capturing fewer details accurately. The
best grading models are Llemma-based, both Llemma34b and Llemma7b, with 90.0%
and 90.2% average accuracy, even beating GPT-3-based models. This is likely because
Llemma is more specifically trained to handle mathematical language. Moreover, the
large number of parameters gives Llemma the ability to capture almost the same
amount of information as GPT-3. The accuracies and F1 scores of Llemma34b graders
on a by-rubric-point basis appears in Table 3.
Based on these results, there seems to be no pattern of which rubric points are more
easily gradable than others. Currently, Llemma and Llemma-based grading models
perform well enough for use by students as practice for writing mathematical proofs.
In future work, we want to investigate how to provide more accurate and detailed
feedback to students, as the correctness of the models is not guaranteed.
**4.2 Comparison to human graders**
After the human graders finish the grading process, we calculate the accuracies and F1
scores of human grader results and compare them with those of the grading models.
The overall performance is also shown in Table 2, with the detailed accuracy and
F1 score for each rubric point shown in Table 4. On average, human graders achieve
86.6% agreement with the research team labels, which is higher than MathBERT but
-----
Problem Data Size Grading Method Accuracy
MathBERT & linear 83.6%
GPT-3 & linear 87.9%
Llemma7b & linear 90.7%
Llemma34b & linear 90.2%
Human Graders 84.1%
MathBERT&linear 83.7%
GPT-3 & linear 88.3%
Llemma7b & linear 90.9%
Llemma34b & linear 90.7%
Human Graders 91.3%
MathBERT&linear 85.7%
GPT-3 & linear 89.2%
Llemma7b & linear 85.7%
Llemma34b & linear 86.6%
Human Graders 85.1%
MathBERT&linear 86.3%
GPT-3 & linear 85.4%
Llemma7b & linear 89.9%
Llemma34b & linear 90.2%
Human Graders 85.8%
P1 1623
P2 1288
P3 342
P4 333
**Table 2: Overall grading accuracy for grading mod-**
els and human graders. Llemma-based grading models have the highest accuracy on average, and slightly
higher than human graders.
lower than GPT-3. The comparison provides a solid ground for further developing and
utilizing our grading models in real courses.
Accuracies for all grading models and individual human graders are plotted in
Figure 2. Human graders are separated into two groups based on whether they have
been a grader at the same university where the data was collected. Group A graders
are from the same university that the proof data was originally collected at, and
thus have more experience using the same rubric that was used for the study. Group
B graders are from other universities, but still have experience in grading proof by
induction problems (perhaps with different rubrics). The figure shows that in general,
group A graders are more accurate than group B graders, and are also more accurate
than the grading models. We conclude that our grading models are able to regularly
outperform minimally trained human graders at matching a grading specification, but
human graders with more extensive training will be able to outperform our grading
models. This difference is similar to the comparison between graders with different
amounts of training in a prior work [34].
10
-----
|Col1|R1|Col3|R2|Col5|R3|Col7|R4|Col9|R5|Col11|R6|Col13|R7|Col15|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||A|F1|A|F1|A|F1|A|F1|A|F1|A|F1|A|F1|
|P1|.888|.863|.862|.848|.892|.834|.897|.810|.909|.855|.931|.886|.935|.876|
|P2|.935|.940|.929|.933|.891|.855|.875|.816|.875|.839|.918|.885|.924|.889|
|P3|.878|.923|.816|.870|.878|.914|.939|.954|.694|.706|.918|.846|.939|.880|
|P4|.979|.988|.938|.962|.917|.926|.896|.906|.812|.800|.896|.783|.875|.750|
**Table 3: Llemma34b model grading performances for all four problems (P1-P4) and**
all seven rubric points (R1-R7). Column label A stands for accuracy; column label
F1 stands for F1 scores.
|Col1|R1|Col3|R2|Col5|R3|Col7|R4|Col9|R5|Col11|R6|Col13|R7|Col15|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
||A|F1|A|F1|A|F1|A|F1|A|F1|A|F1|A|F1|
|P1|.830|.810|.741|.745|.852|.815|.859|.808|.830|.763|.896|.860|.881|.800|
|P2|.970|.970|.933|.926|.881|.846|.881|.818|.904|.876|.889|.839|.933|.903|
|P3|.881|.919|.807|.852|.844|.871|.822|.838|.793|.741|.896|.767|.911|.786|
|P4|.881|.921|.874|.912|.859|.882|.815|.834|.807|.750|.867|.640|.904|.698|
**Table 4: Human grading accuracy and F1 scores, broken down by rubric point. The**
data size is different from the model accuracy testing, as each human grader graded
60 proofs in total.
**4.3 How large of a dataset do we need?**
We are also interested in learning how much data is needed to achieve an acceptable
performance. We did another set of training and testing to compare the effect of
training data size on model performance. To ensure consistency, the same 30% of
the data is used as the testing set. The training set sizes range from 50 to the max
available size, with a step size of 50 (before size 200) or 100 (after size 200). The model
performance on each training data size is shown in Figure 3.
For all four models, the performance is still increasing even with 1100 training data
points, so there is space for improvement by collecting more data. Model performance
might also be better by making the grading models more complex, such as adding
extra layers. However, most of the improvement seems to be complete before the 400
training size threshold. For Llemma34b, the performance at 400 training data already
achieves 98.8% of the full data size accuracy. Based on these results, we conclude that
training a model with 400 proofs is sufficient for adequate results.
### 5 User Study Methods
Using the trained grading pipelines, we implemented an autograder for proof by
induction problems using the online assessment system PrairieLearn [35]. Due to computational resource restrictions and the fact that GPT-3 performed nearly as well as
Llemma on the 3 proofs we used for the user study, we chose to use GPT-3 as the
embedding model for our user study. For each submitted student proof on a problem,
the grading pipeline reports seven correct/incorrect results using the rubrics R1-R7
introduced in 3.1. The final score for the proof is the sum of the 7 rubric point results,
ranged between 0 and 7, and the rescaled to 0-100. The grader then provides feedback
11
-----
A_3
A_1
A_2
B_6
Llemma7b
Llemma34b
B_3
Grader
GPT3
B_1
B_5
B_4
MathBERT
B_2
0.4 0.5 0.6 0.7 0.8 0.9 1.0
Accuracy
**Fig. 2: Accuracy for each grading model and human grader. A i are the graders from**
the same university and are more familiar with the used grading rubrics, and B i are
from other universities. Graders with more experience with the grading rubrics achieve
higher accuracy than unexperienced graders and all grading models.
on the rubric point results based on various strategies we defined. Figure 4 shows an
example page from PrairieLearn where students can submit their proofs and receive
automatic feedback.
**5.1 Study Setup**
To investigate the accuracy of our grading models in a real class and better understand
if they can help students write better proofs without the support of human graders,
we conducted a study in Spring 2024 with students registered in the Discrete Mathematics course (CS173) at the University of Illinois Urbana-Champaign. We address
the following research questions:
_RQ1 Are students able to write better proofs by interacting with the autograder and_
the feedback it generates?
_RQ2 Are students satisfied with the autograder and the feedback it provides?_
_RQ3 Does using the autograder make students more willing to use similar AI_
autograders in the future?
The study was conducted earlier in the semester before students had been introduced to systematic instructions on proof by induction or had practiced any induction
problems at the college level. As part of the study, students were asked to complete
a learning activity and were offered extra credit points for its completion, equivalent
to the completion of one homework (about 0.36% of the course grade). The students
had one week to complete the learning activity at a time of their choice.
The learning activity consisted of three parts. In the first part, students were
expected to read a 6-page textbook section about induction, including information on
12
-----
0.90
0.85
0.80
0.75
0.70
0.65
0.90
0.85
0.80
0.75
0.70
0.65
0.90
0.85
0.80
0.75
0.70
0.65
0.90
0.85
0.80
0.75
0.70
0.65
200 400 600 800 1000
P1
P2
P3
P4
(a) MathBERT
200 400 600 800 1000
P1
P2
P3
P4
(c) Llemma7b
200 400 600 800 1000
P1
P2
P3
P4
(b) GPT-3
200 400 600 800 1000
P1
P2
P3
P4
(d) Llemma34b
**Fig. 3: Training Size vs. Accuracy. With 400 training data, all grading models are**
achieving near convergent accuracy, and this accuracy is about 1.2% away from the
maximum accuracy for the grading models.
the theoretical ground for induction, the steps of an induction proof, and an example problem with solution. The second part asked students to complete 3 induction
problems (P1-P3 as defined in 3.1). We only consider 3 problems out of the 4 used
in the model training process for two reasons: first, we wanted to limit the length of
the learning activity to 1 hour, which would not be reasonable if we expected students to read the text and complete all 4 problems; second, we removed P4 due to its
low accuracy compared to P1-P3, as this could potentially hinder the students’ learning experiences. The third part of the learning activity was a feedback survey. The
questions are listed below:
**S01 In CS173, I usually receive accurate grading from human graders for my proofs.**
**S02 Feedback from human graders has helped me improve my proofs.**
**S03 Overall, I am satisfied with my experience with human graders in CS173.**
**S04 I would prefer to wait for a week to get human grading feedback for the proofs I**
write for my homework than use the autograder for instant feedback.
**S05 Even if a well-developed AI autograder has about the same accuracy as human**
graders, I still trust the grading results from human graders more.
13
-----
(a) Question prompt and submission panel
(b) Graded results and feedback from the autograder
**Fig. 4: An example of an autograded problem in PrairieLearn. Students type in their**
proof in the markdown editor box, and are able to see the markdown preview in real
time. Once they click “Save & Grade”, they receive immediate feedback form the
autograder, indicating if the submission is correct or not.
14
-----
**S06 I am comfortable using a well-developed AI autograder in my course to give me**
feedback as I prepare for my quizzes.
**S07 Given that I can still regrade my work with human graders, I am comfortable**
having a well-developed AI autograder in my course to grade my quizzes.
**S08 I received accurate grading from the autograder.**
**S09 Feedback from the autograder helped me improve my proofs.**
**S10 Overall, I am satisfied with my experience of using the autograder.**
The first subsection of the survey includes questions S01-S03, which asks the students about their prior experience with human graders in the course. The second
subsection includes questions S04-S07, focusing on students’ perceptions of using AI
for autograding. The last questions ask students to share their experiences with the
AI autograder during the learning actvity. The survey questions are presented on a
five-point Likert scale using the following options and corresponding numeric values:
“Strongly disagree” (-2), “Disagree” (1), “Neutral” (0), “Agree” (1), “Strongly agree”
(2). Negatively worded questions (e.g., “I would prefer to wait for a week to get human
grading feedback for the proofs I write for my homework than use the autograder for
instant feedback”) are reverse coded in the statistical analysis.
To help us get a more full picture of student perceptions about the graders, we
also asked the following open-ended survey questions:
**S11 In your opinion, how can the autograder be useful in your learning experiences?**
**S12 Can you share a positive experience you had with the autograder?**
**S13 Can you share a negative experience you had with the autograder?**
**S14 Do you have other comments on the autograder?**
We do not use S11-S14 to directly answer any of the research questions, but we will
use them in the discussion as we seek to understand the student’s reasoning behind
why they answered the way they did on the Likert items.
**5.2 Experimental Conditions**
The students were randomly assigned to 3 groups. We refer to the groups as Self_eval, First, and Random. The Self-eval group was the control group. In the Self-eval_
group, students did not have access to the AI autograders during the learning activity.
Instead, upon proof submission, students in this group could see the 7 rubric points
for the problem and were asked to self-assess and make improvements to their proof
until they were satisfied with the solution. This is our baseline group, as it mimics a
student completing a proof on their own.
Both the First and Random groups had access to the AI autograders, forming our
treatment groups. Students in the First group received feedback on the first rubric
point out of the seven that the autograder identified them having done incorrectly. The
order is defined by the sequence of rubrics R1-R7, which corresponds to the logical
flow of a proof. For an induction proof, it is common to state and prove the base cases
first, then state the goal for the induction steps before actually completing the steps.
The strategy of reporting the first incorrect rubric point is based on some students’
preference to work incrementally, fixing the earlier parts of their work before moving
15
-----
on to the next part. When we designed the experiment, we expected this strategy to
be the most helpful to students.
Students in the Random group received feedback for one of rubric points, randomly
selected from all those that the autograder assessed as incorrect. This group is included
in our study to test if different strategies for reporting have different effects.
### 6 User Study Results
After the learning activity deadline, we manually screened all students’ submissions
and excluded students who showed no effort during the study. These excluded students
either submitted blank proofs to all 3 problems or only typed something trivial such
as “hello” or a question mark in their proofs. After applying this exclusion criteria, we
had 68 students in the Self-eval group, 59 students in the First group, and 42 students
in the Random group. Due to constraints imposed by the online assessment platform
PrairieLearn, it was easier to pre-assign all eligible students to one of the experimental
conditions before they elected to participate. Students agreed to participate upon
opening the assessment in PrairieLearn. These two exclusion criteria combined resulted
in the small variation in the population of sizes for each treatment. This small variation
has no effect on the conclusions of the study because the assignment of students to
experimental groups was still random.
**6.1 Proof Problem Performance**
For each student, we collected the number of submissions for each problem and the
score for each submission. We are interested in investigating the difference between
students’ initial submissions and best submissions for all three problems. Figure 5
shows the distribution of the initial scores and best scores for each student group, and
figure 6 shows the number of submissions for all the proof problems combined.
For each of the three proof problems (P1-P3), we performed a Kruskal-Wallis
test across all groups to find the difference between students’ earned scores on their
initial submission. The results are p = 0.26, 0.14, 0.27 respectively. These indicate that
students are starting their proof-writing with similar quality across all groups, which
makes sense as the students are randomly assigned to the three groups.
On the other hand, a Kruskal-Wallis test on students’ earned scores on their best
submission shows statistically significant differences for all three problems, as shown in
Table 5. Post-hoc pairwise tests reveal that the best submission scores are significantly
higher for students in First and Random than Self-eval, while there is no significant
difference between First and Random.
To better understand the effect of using the AI autograder to support the writing
of proofs while controlling for initial student knowledge, we propose a regression model
to determine the score gain for students in the three groups, using the students’ initial
and best submission scores. We fit an ordinary least squares (OLS) model of the form
BESTij = µj + α Iij + β1G1i + β2G2i (1)
16
-----
Initial Best
100
80
60
Score
40
20
0
Self-eval Random First
Group
**Fig. 5: Distribution of initial and best scores in all problems, separated by the group.**
Distributions of initial and best submissions for the Self-eval group are similar, while
_Random and First have different distributions._
Mean(SD)
Problem Self-eval Random First _H_ _p_
P1 75.8(38.8) 90.1(24.1) 92.1(22.9) 10.95 0.004
P2 60.1(32.4) 79.1(24.6) 76.8(25.6) 13.62 0.001
P3 51.1(40.4) 68.8(37.8) 75.9(34.6) 13.68 0.001
**Table 5: Kruskal-Wallis, mean, and standard deviation**
for the best submission scores of each problem. Students
in Random and First have a significantly higher score in
their best submissions.
where BESTij is the predicted best submission score for student i on problem j, and
Iij is the initial score for student i on problem j. Both scores range between 0 and 100.
_G1i and G2i are indicators of the students being in Random or First, respectively._
For any student in Self-eval, both values are 0; for students in Random G1i is 1 and
_G2i is 0, and for students in First G1i is 0 and G2i is 1. We want to estimate the_
parameters µj, α, β1, and β2, which can be interpreted as follows:
- µj: Control for the difficulty of problem j
- α: Control for the initial score of the proof
- β1: The effect of feedback for students in the Random group
17
-----
Self-eval
100
50
0
Random
100
50
0
First
100
50
0
0 5 10 15 20 25 30
Number of submissions
**Fig. 6: Histogram for the number of submissions per student for each of the three**
groups. Students in the Self-eval group have slightly fewer submissions for all proof
problems combined.
- β2: The effect of feedback for students in the First group
Table 6 summarizes the results from the regression analysis using Equation 1. The
baseline students are students in Self-eval, who do not receive the results or feedback
from the autograder. Compared with Self-eval students, students in the First and
_Random groups are able to gain respectively 11.6 and 11.3 more points from the proofs_
they write after controlling for their initial submission performances. This shows that
students can improve and write more accurate proofs using the feedback from the
autograder compared with students who are just self-evaluating using the rubric. In
future work, we will seek to also measure knowledge retention by having students
complete additional proofs without the autograder in a delayed posttest.
**6.2 Survey Results**
The survey results are shown in Figure 7 and Figure 8. The three subsections of
the surveys address different aspect of student perceptions: S01-S03 address student
perceptions of human graders, S04-S07 address students willingness to have AI graders
integrated into various parts of their course, and S08-S10 address student perceptions
of the AI graders. Wee calculated a Cronbach’s α for each of the three subsections.
The calculated Cronbach’s α are 0.81, 0.72, 0.82 respectively, indicating good internal
18
-----
Coefficient Description Value _stderr_ _t_ _p_
_α_ Control for initial score 0.693 0.03 27.9 _< 0.001_
_β1_ Effect of the Random group 11.3 2.32 4.86 _< 0.001_
_β2_ Effect of the First group 11.6 2.10 5.51 _< 0.001_
_µ1_ Control for difficulty of P1 26.5 2.56 10.36 _< 0.001_
_µ2_ Control for difficulty of P2 20.2 2.37 8.52 _< 0.001_
_µ3_ Control for difficulty of P3 20.0 2.25 8.92 _< 0.001_
**Table 6: Coefficients from Equation 1. The R[2]** metric of this regression
is 0.665, indicating a strong relationship between variables. Students in
the experimental groups using the AI autograder are able to achieve
11 more points in the proofs than students using the rubrics for selfevaluation.
reliability. The Self-eval group were not asked questions S08-S14 as these students did
not interact with the autograder during the learning activity.
**6.2.1 RQ2: Are students satisfied with the autograder and**
**feedback?**
We use responses to survey questions S01-S03 and S08-S10, as shown in Table 7, to
address RQ2. S01-S03 capture students’ perceptions of human graders, while S08-S10
assess their perceptions of the AI autograder. Specifically, S01 and S08 measure the
perceived accuracy of the graders, S02 and S09 evaluate their helpfulness, and S03
and S10 measure overall satisfaction with the graders. On average, human graders are
perceived to be more accurate and helpful than the AI autograder, with statistically
significant differences (p < 0.05). However, although the mean satisfaction rating is
higher for human graders, the differences are not statistically significant for either the
First or Random groups. We hypothesize that this is due to the known benefits of the
AI autograder, such as timely feedback and the ability to submit an unlimited number
of times.
Accuracy Helpfulness Satisfaction
Group S01 S08 _t_ _p_ S02 S09 _t_ _p_ S03 S10 _t_ _p_
Self-eval 0.84 — — — 0.76 — — — 0.74 — — —
First 1.14 0.41 4.37 _< 0.001_ 0.86 0.22 3.61 _< 0.001_ 0.75 0.44 1.69 0.094
Random 1.02 0.48 2.81 0.006 0.69 0.24 2.14 0.035 0.74 0.52 0.97 0.336
**Table 7: S01-S03 and S08-S10 statistics for three groups, and the pairwise t-test for**
independence. Self-eval students did not have access to S08-S10 as these students did
not interact with the autograder during the learning activity. All scores range between
-2 and 2.
**6.2.2 RQ3: Does using the autograder make students more willing**
**to use AI autograders?**
We want to understand students’ perception and trust on using an autograder for their
coursework during various settings. We want to know whether interactions with the
19
-----
First
Random
Self−eval
First
Random
Self−eval
|S01: In CS173, I usually receive accurate grading from human graders for my proofs.|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|0%||||14|%||||86%|
|2%||||19|%||||79%|
|13%||||13|%||||74%|
|0% 2% 13% 86% 79% 74% 14% 19% 13%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
|S02: Feedback from human graders has helped me improve my proofs.|||||||||||
||||||||||||
|8%||||19||%||||73%|
|7%||||31||%||||62%|
|10%|||||22|%||||68%|
First
Random
Self−eval
First
Random
Self−eval
|8% 7% 10% 73% 62% 68% 19% 31% 22%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11|
|---|---|---|---|---|---|---|---|---|---|---|
|S03: Overall, I am satisfied with my experience with human graders in CS173.|||||||||||
||||||||||||
|12%||||17||%||||71%|
|7%||||19||%||||74%|
|10%||||22||%||||68%|
|12% 7% 10% 71% 74% 68% 17% 19% 22%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|S04: I would prefer to wait for a week to get human grading feedback for the proofs I write for my homework than use the autograder for instant feedback.||||||||||
|||||||||||
|42%||||25|%||||32%|
|31%||||31|%||||38%|
|40%||||24|%||||37%|
First
Random
Self−eval
First
Random
Self−eval
|42% 31% 40% 32% 38% 37% 25% 31% 24%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|S05: Even if a well−developed AI autograder has about the same accuracy as human graders, I still trust the grading results from human graders more.||||||||||
|||||||||||
|32%||||19|%||||49%|
|26%||||21|%||||52%|
|16%||||37|%||||47%|
|32% 26% 16% 49% 52% 47% 19% 21% 37%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|S06: I am comfortable using a well−developed AI autograder in my course to give me feedback as I prepare for my quizzes.||||||||||
|||||||||||
|8%||||15|%||||76%|
|12%||||10|%||||79%|
|13%||||18|%||||69%|
First
Random
Self−eval
|8% 12% 13% 76% 79% 69% 15% 10% 18%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|S07: Given that I can still regrade my work with human graders, I am comfortable having a well−developed AI autograder in my course to grade my quizzes.||||||||||
|||||||||||
|24%||||10|%||||66%|
|14%||||7|%||||79%|
|15%||||26|%||||59%|
100 50 0 50 100
Percentage
Response Strongly disagree Disagree Neutral Agree Strongly agree
**Fig. 7: Responses from survey questions given to all 3 groups. The percentages**
shown on each distribution represent the percentage of negative responses (disagree &
strongly disagree), the percentage of netural responses, and the percentage of positive
responses (agree & strongly agree).
autograder can lead to more positive perceptions. These are reflected through S04-S07
(see Table 8). S04 examines students’ preferences between receiving instant feedback
from AI autograders and obtaining more reliable feedback from human graders. S05
assesses the level of trust students have in AI autograders in general. S06 and S07
explore potential applications of AI autograders.
On average, students in First and Random are slightly more comfortable using
a well-developed autograding system in the future. However, one-way ANOVA tests
among the groups show the differences are not statistically significant.
20
-----
First
Random
First
Random
|S08: I received accurate grading from the autograder.|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|||||||||||
|20%||||27|%||||53%|
|21%||||19|%||||60%|
|20% 21% 53% 60% 27% 19%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|S09: Feedback from the autograder helped me improve my proofs.||||||||||
|||||||||||
|22%||||34|%||||44%|
|31%||||19|%||||50%|
First
Random
|22% 31% 44% 50% 34% 19%|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|
|---|---|---|---|---|---|---|---|---|---|
|S10: Overall, I am satisfied with my experience of using the autograder.||||||||||
|||||||||||
|17%||||31|%||||53%|
|19%||||26|%||||55%|
100 50 0 50 100
Percentage
Response Strongly disagree Disagree Neutral Agree Strongly agree
**Fig. 8: Responses from survey questions given to only First and Random. The per-**
centages shown on each distribution represent the percentage of negative responses
(disagree & strongly disagree), the percentage of netural responses, and the percentage of positive responses (agree & strongly agree).
Survey Self-eval Random First _F_ _p_
S04 -0.06 -0.29 0.10 1.20 0.31
S05 -0.51 -0.36 -0.22 1.06 0.35
S06 0.72 0.95 0.92 1.01 0.37
S07 0.63 0.98 0.76 1.17 0.31
**Table 8: S04-S07 mean numeric scores about**
perceptions using AI autograders, with one-way
ANOVA test statistics. All scores range between
-2 and 2.
### 7 Discussion
**7.1 Model Accuracy**
In terms of student-perceived accuracy, the results mostly agree with the accuracy
measures in 4.2. Although GPT-3 models are observed to perform relatively well, they
are still less accurate than experienced human graders. This difference in accuracy
can be amplified even more in practice. Repetition of similar proofs might skew the
accuracy measures because similar proofs tend to be graded the same way due to
the stable nature of the large language models. Students can potentially encounter
multiple incorrect gradings in a row by submitting similar proofs, which increases their
sense of mistrust on the graders.
Currently, the large language models are used as black boxes for embedding. There
are cases where the models do not precisely interpret the input sentences, but it is
hard to predict why, when, or how the misinterpretation happens. This can lead to
the autograder giving wrong grades in mysterious ways. This has happened to some
of the users. Some open-ended feedback from the users are listed below:
- “Additionally, when I added ‘Suppose that’ in front of my Inductive Hypothesis,
it deemed it incorrect.”
21
-----
- “It was not always accurate especiall [sic] when I didnt us LateX perfectly.”
- “My grade also improved by changing some, but not all, of my ‘\times’ to ‘\cdot’
for no apparent reason, either. When I changed all of them, my grade remained
unchanged.”
- “Spacing my sentences out into separate paragraphs improved my grade by 15%,
also for no apparent reason.”
One possible improvement for this situation is to break down a proof problem into
sub-problems, where students submit portions of the proof into different input boxes.
This can help decrease the complexity of the proofs being submitted and provide more
accurate embeddings. This also makes the task slightly different due to the added
scaffolding, which may be appropriate for formative settings, but less so for exams.
**7.2 Feedback Helpfulness**
Despite our findings showing that students are able to make improvements using the
feedback from the autograder, some students felt that the feedback was not detailed
enough. The outputs from the grading models are always booleans representing correctness: they can tell if a student proof is incorrect, but not why the proof is incorrect.
This form of feedback roughly aligns with knowledge of results defined by Shute [6],
which was shown to be less effective than elaborated feedback in several studies [36].
The lack of useful feedback that can help students improve their proofs is also reported.
- “Some of the feedback was basic. didn’t say anything specific. ”
- “I also think that the autograder was not specific enough to identify what exactly
was wrong with my proofs. I was stuck on the 2nd problem for a while, and the
feedback that I was getting was not exactly enough to pinpoint what exactly was
wrong.”
Providing such automated and elaborated feedback should be possible with the help
of generative models. Using few-shot training techniques such as Retrieval-Augmented
Generation [37], we can fine-tune chatbots such as ChatGPT to provide suggestions
or hints that can help students make improvements and are better aligned with the
learning requirements for individual courses. This enhancement to our system can be
treated in future work.
On the other hand, some students acknowledged that having the instant feedback
was already very helpful in their proof-writing process.
- “The autograder tells me where exactly I am lacking information, which helps
me more because I can pinpoint where I need to work on.”
- “The autograder allows for quick and efficient feedback of on proofs versus waiting
to hear back days after on issues with ones proof. This quick turn around time
allows for a easy/fast way to get better at proof writing.”
- “I got absolutely stuck with the last proof! However, when I used the autograder,
it gave me around a 75%. This gave me confidence that I was on the right track,
and I was able to not get discouraged. I was slowly able to incorporate more parts
of the solution into my proof”
22
-----
**7.3 Perceptions on AI grading**
Proof grading is a subjective task, which can lead to low levels of trust from students.
In our initial designing of the learning activities, we had hoped that students could
have more trust or willingness in using AI autograders in the future as a result of
interacting with our grader. However, results in 6.2 do not confirm our initial hypothesis. Students in First and Random are slightly more willing to use an AI autograder
in various settings, but the difference is not statistically significant. Simply using the
autograder does not sufficiently increase students’ willingness to use AI autograders
in general. One possible action to enhance trust in AI autograding is to provide more
feedback and explanation during the grading process, as Ha and Kim suggests that providing additional information about AI models can be effective in mitigating cognitive
bias [38].
### 8 Conclusions and Future Work
In this study, we have developed a set of machine learning models to autograde freeform
mathematical proofs. These models allow for fast training, and they achieve similar
accuracy in autograding freeform mathematical proofs compared with most human
graders recruited in the study. These models can be used to develop NLP graders and
deployed on large-scale educational platforms in the future, enabling more students to
receive the necessary support when learning to write mathematical proofs.
After creating and deploying autograded proof by induction problems using the
grading models, we conducted a user study to find the impact of the autograder. Our
results show that students who use the autograder for getting feedback and making
improvements were able to score more than 10% higher on the proof problems than
students who did not use the autograder. This is an encouraging sign for creating more
autograded problems in the future. Despite being helped by the autograder, many
students still preferred feedback from human graders, even if it was delayed.
In the future, it can be helpful to put effort in enabling more student trust in
AI autograders. Efforts can be put on improving the grading models as well. As we
only trained the grading models on mathematical induction problems, the abilities for
similar models to grade other types of proof such as proof of bijection or geometry is
still unknown. We will start to collect proof data for various types of mathematical
proof problems, and check if our designed grading models can perform as well on
these problems. The cost of training and running the models can also be lowered, as
currently the best performing models require good GPUs to run, and the CPU-based
models do not achieve performances that are as good.
**Acknowledgments**
This work used the Delta system at the National Center for Supercomputing
Applications through allocation CIS230355 from the Advanced Cyberinfrastructure
Coordination Ecosystem: Services & Support (ACCESS) program, which is supported
by National Science Foundation grants #2138259, #2138286, #2138307, #2137603,
and #2138296.
23
-----
The work is also made possible by members of previous research projects collecting
and labeling the proof data, and graders from multiple institutes providing grading
data for comparison.
### References
[1] Computing Curricula, A.f.C.M.A., Society, I.C.: Computer Science Curricula
2013: Curriculum Guidelines for Undergraduate Degree Programs in Computer
Science. Association for Computing Machinery, New York, NY, USA (2013)
[2] Computing Curricula, A.f.C.M.A., Society, I.C.: Curriculum guidelines for undergraduate degree programs in computer engineering. Technical report, New York,
NY, USA (2016)
[3] Computing Curricula, T.J.T.F.: Curriculum guidelines for undergraduate degree
programs in software engineering. Technical report, New York, NY, USA (2014)
[4] Goldman, K., Gross, P., Heeren, C., Herman, G., Kaczmarczyk, L., Loui, M.C.,
Zilles, C.: Identifying important and difficult concepts in introductory computing courses using a delphi process. In: Proceedings of the 39th SIGCSE
Technical Symposium on Computer Science Education. SIGCSE ’08, pp. 256–
[260. Association for Computing Machinery, New York, NY, USA (2008). https:](https://doi.org/10.1145/1352135.1352226)
[//doi.org/10.1145/1352135.1352226 . https://dl.acm.org/doi/10.1145/1352135.](https://doi.org/10.1145/1352135.1352226)
[1352226 Accessed 2023-12-05](https://dl.acm.org/doi/10.1145/1352135.1352226)
[5] Anderson, J.R., Corbett, A.T., Koedinger, K.R., Pelletier, R.: Cognitive tutors:
Lessons learned. The journal of the learning sciences 4(2), 167–207 (1995)
[6] Shute, V.J.: Focus on formative feedback. Review of Educational
Research **78(1),** 153–189 (2008) [https://doi.org/10.3102/0034654307313795](https://doi.org/10.3102/0034654307313795)
[https://doi.org/10.3102/0034654307313795](https://arxiv.org/abs/https://doi.org/10.3102/0034654307313795)
[[7] Poulsen, S.: Student Proof by Induction Data Set. https://doi.org/10.7910/DVN/](https://doi.org/10.7910/DVN/OTRLXF)
[OTRLXF . https://doi.org/10.7910/DVN/OTRLXF](https://doi.org/10.7910/DVN/OTRLXF)
[8] Stylianides, G., Stylianides, A., Weber, K.: Research on the teaching and learning
of proof: Taking stock and moving forward. In: Cai, J. (ed.) Compendium for
Research in Mathematics Education, pp. 237–266. National Council of Teachers
of Mathematics, Reston, VA (2017). Chap. 10
[9] Breitner, J.: Visual Theorem Proving with the Incredible Proof Machine, pp.
[123–139 (2016). https://doi.org/10.1007/978-3-319-43144-4 8](https://doi.org/10.1007/978-3-319-43144-4_8)
[10] Lerner, S., Foster, S.R., Griswold, W.G.: Polymorphic blocks: Formalism-inspired
ui for structured connectors. In: Proceedings of the 33rd Annual ACM Conference
on Human Factors in Computing Systems, pp. 3063–3072 (2015)
24
-----
[11] Poulsen, S., Viswanathan, M., Herman, G.L., West, M.: Proof Blocks: Autogradable Scaffolding Activities for Learning to Write Proofs. In: Proceedings of
the 27th ACM Conference on Innovation and Technology in Computer Science
[Education Vol. 1, pp. 428–434 (2022). https://doi.org/10.1145/3502718.3524774](https://doi.org/10.1145/3502718.3524774)
[. arXiv:2106.11032 [cs]. http://arxiv.org/abs/2106.11032 Accessed 2023-09-09](http://arxiv.org/abs/2106.11032)
[12] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N.,
Kaiser, L., Polosukhin, I.: Attention Is All You Need. arXiv. arXiv:1706.03762 [cs]
[(2023). https://doi.org/10.48550/arXiv.1706.03762 . http://arxiv.org/abs/1706.](https://doi.org/10.48550/arXiv.1706.03762)
[03762 Accessed 2023-11-26](http://arxiv.org/abs/1706.03762)
[13] Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P.,
Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss,
A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J.,
Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B.,
Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D.:
Language Models are Few-Shot Learners. arXiv. arXiv:2005.14165 [cs] (2020).
[http://arxiv.org/abs/2005.14165 Accessed 2024-01-10](http://arxiv.org/abs/2005.14165)
[14] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: BERT: Pre-training of Deep
Bidirectional Transformers for Language Understanding. arXiv. arXiv:1810.04805
[[cs] (2019). http://arxiv.org/abs/1810.04805 Accessed 2023-09-09](http://arxiv.org/abs/1810.04805)
[15] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., Bikel, D., Blecher, L., Ferrer,
C.C., Chen, M., Cucurull, G., Esiobu, D., Fernandes, J., Fu, J., Fu, W., Fuller,
B., Gao, C., Goswami, V., Goyal, N., Hartshorn, A., Hosseini, S., Hou, R., Inan,
H., Kardas, M., Kerkez, V., Khabsa, M., Kloumann, I., Korenev, A., Koura,
P.S., Lachaux, M.-A., Lavril, T., Lee, J., Liskovich, D., Lu, Y., Mao, Y., Martinet, X., Mihaylov, T., Mishra, P., Molybog, I., Nie, Y., Poulton, A., Reizenstein,
J., Rungta, R., Saladi, K., Schelten, A., Silva, R., Smith, E.M., Subramanian,
R., Tan, X.E., Tang, B., Taylor, R., Williams, A., Kuan, J.X., Xu, P., Yan, Z.,
Zarov, I., Zhang, Y., Fan, A., Kambadur, M., Narang, S., Rodriguez, A., Stojnic,
R., Edunov, S., Scialom, T.: Llama 2: Open Foundation and Fine-Tuned Chat
[Models. arXiv. arXiv:2307.09288 [cs] (2023). http://arxiv.org/abs/2307.09288](http://arxiv.org/abs/2307.09288)
Accessed 2024-01-10
[16] Haller, S., Aldea, A., Seifert, C., Strisciuglio, N.: Survey on Automated Short
Answer Grading with Deep Learning: from Word Embeddings to Transformers.
[arXiv. arXiv:2204.03503 [cs] (2022). https://doi.org/10.48550/arXiv.2204.03503](https://doi.org/10.48550/arXiv.2204.03503)
[. http://arxiv.org/abs/2204.03503 Accessed 2023-11-15](http://arxiv.org/abs/2204.03503)
[17] Kiros, R., Zhu, Y., Salakhutdinov, R., Zemel, R.S., Torralba, A., Urtasun, R.,
[Fidler, S.: Skip-Thought Vectors. arXiv. arXiv:1506.06726 [cs] (2015). https://](https://doi.org/10.48550/arXiv.1506.06726)
[doi.org/10.48550/arXiv.1506.06726 . http://arxiv.org/abs/1506.06726 Accessed](https://doi.org/10.48550/arXiv.1506.06726)
2023-11-26
25
-----
[18] Bonthu, S., Sripada, R.S., Prasad, M.: Automated Short Answer Grading
[Using Deep Learning: A Survey, pp. 61–78 (2021). https://doi.org/10.1007/](https://doi.org/10.1007/978-3-030-84060-0_5)
[978-3-030-84060-0 5](https://doi.org/10.1007/978-3-030-84060-0_5)
[19] Chen, B., West, M., Zilles, C.: Peer-grading ”explain in plain english”: A
bayesian calibration method for categorical answers. In: Proceedings of the
53rd ACM Technical Symposium on Computer Science Education - Volume 1.
SIGCSE 2022, pp. 133–139. Association for Computing Machinery, New York,
[NY, USA (2022). https://doi.org/10.1145/3478431.3499409 . https://doi.org/10.](https://doi.org/10.1145/3478431.3499409)
[1145/3478431.3499409](https://doi.org/10.1145/3478431.3499409)
[20] Azad, S., Chen, B., Fowler, M., West, M., Zilles, C.: Strategies for deploying unreliable ai graders in high-transparency high-stakes exams. In: Artificial Intelligence
in Education: 21st International Conference, AIED 2020, Ifrane, Morocco, July
6–10, 2020, Proceedings, Part I 21, pp. 16–28 (2020). Springer
[21] Lan, A.S., Vats, D., Waters, A.E., Baraniuk, R.G.: Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical
Questions. In: Proceedings of the Second (2015) ACM Conference on Learning @
Scale. L@S ’15, pp. 167–176. Association for Computing Machinery, New York,
[NY, USA (2015). https://doi.org/10.1145/2724660.2724664 . https://dl.acm.org/](https://doi.org/10.1145/2724660.2724664)
[doi/10.1145/2724660.2724664 Accessed 2023-11-15](https://dl.acm.org/doi/10.1145/2724660.2724664)
[22] Shen, J.T., Yamashita, M., Prihar, E., Heffernan, N., Wu, X., Graff, B., Lee, D.:
MathBERT: A Pre-trained Language Model for General NLP Tasks in Mathe[matics Education. arXiv. arXiv:2106.07340 [cs] (2023). https://doi.org/10.48550/](https://doi.org/10.48550/arXiv.2106.07340)
[arXiv.2106.07340 . http://arxiv.org/abs/2106.07340 Accessed 2023-08-17](https://doi.org/10.48550/arXiv.2106.07340)
[23] Wu, Y., Jiang, A.Q., Li, W., Rabe, M.N., Staats, C., Jamnik, M., Szegedy, C.:
Autoformalization with Large Language Models. arXiv. arXiv:2205.12615 [cs]
[(2022). https://doi.org/10.48550/arXiv.2205.12615 . http://arxiv.org/abs/2205.](https://doi.org/10.48550/arXiv.2205.12615)
[12615 Accessed 2024-03-25](http://arxiv.org/abs/2205.12615)
[24] Li, T.W., Hsu, S., Fowler, M., Zhang, Z., Zilles, C., Karahalios, K.: Am i wrong, or
is the autograder wrong? effects of ai grading mistakes on learning. In: Proceedings
of the 2023 ACM Conference on International Computing Education Research Volume 1. ICER ’23, pp. 159–176. Association for Computing Machinery, New
[York, NY, USA (2023). https://doi.org/10.1145/3568813.3600124 . https://doi.](https://doi.org/10.1145/3568813.3600124)
[org/10.1145/3568813.3600124](https://doi.org/10.1145/3568813.3600124)
[25] Lee, M.K.: Understanding perception of algorithmic decisions: Fairness, trust,
and emotion in response to algorithmic management. Big Data & Soci[ety 5(1), 2053951718756684 (2018) https://doi.org/10.1177/2053951718756684 .](https://doi.org/10.1177/2053951718756684)
Publisher: SAGE Publications Ltd. Accessed 2024-05-16
[26] Castelo, N., Bos, M.W., Lehmann, D.R.: Task-Dependent Algorithm Aversion.
[Journal of Marketing Research 56(5), 809–825 (2019) https://doi.org/10.1177/](https://doi.org/10.1177/0022243719851788)
[26](https://doi.org/10.1177/0022243719851788)
-----
[0022243719851788 . Publisher: SAGE Publications Inc. Accessed 2024-05-16](https://doi.org/10.1177/0022243719851788)
[27] Hsu, S., Li, T.W., Zhang, Z., Fowler, M., Zilles, C., Karahalios, K.: Attitudes surrounding an imperfect ai autograder. In: Proceedings of the 2021 CHI Conference
on Human Factors in Computing Systems. CHI ’21. Association for Comput[ing Machinery, New York, NY, USA (2021). https://doi.org/10.1145/3411764.](https://doi.org/10.1145/3411764.3445424)
[3445424 . https://doi.org/10.1145/3411764.3445424](https://doi.org/10.1145/3411764.3445424)
[28] Poulsen, S., Gertner, Y., Cosman, B., West, M., Herman, G.L.: Efficiency of
learning from proof blocks versus writing proofs. In: Proceedings of the 54th ACM
Technical Symposium on Computer Science Education V. 1, pp. 472–478 (2023)
[29] Poulsen, S., Chen, H., Gertner, Y., Cosman, B., West, M., Herman, G.L.: Measuring the Impact of Distractors on Student Learning Gains while Using Proof
Blocks (2023)
[30] Poulsen, S., Gertner, Y., Chen, H., Cosman, B., West, M., Herman, G.L.: Disentangling the learning gains from reading a book chapter and completing proof
blocks problems. In: Proceedings of the 55th ACM Technical Symposium on
Computer Science Education V. 1 (2024)
[31] Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M.D., McAleer, S., Jiang,
A.Q., Deng, J., Biderman, S., Welleck, S.: Llemma: An Open Language Model
[For Mathematics. arXiv. arXiv:2310.10631 [cs] (2023). https://doi.org/10.48550/](https://doi.org/10.48550/arXiv.2310.10631)
[arXiv.2310.10631 . http://arxiv.org/abs/2310.10631 Accessed 2023-11-29](https://doi.org/10.48550/arXiv.2310.10631)
[[32] OpenAI: Embeddings. Accessed: January 2024 (2023). https://platform.openai.](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
[com/docs/guides/embeddings/what-are-embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings)
[33] Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T.,
Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito,
Z., Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., Chintala, S.: Pytorch: An imperative style, high-performance deep learning library.
In: Advances in Neural Information Processing Systems 32, pp. 8024–8035. Cur[ran Associates, Inc., Vancuver, Canada (2019). http://papers.neurips.cc/paper/](http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf)
[9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf](http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf)
[34] Fowler, M., Chen, B., Azad, S., West, M., Zilles, C.: Autograding ”Explain in
Plain English” questions using NLP. In: Proceedings of the 52nd ACM Technical
Symposium on Computer Science Education, pp. 1163–1169. ACM, Virtual Event
[USA (2021). https://doi.org/10.1145/3408877.3432539 . https://dl.acm.org/doi/](https://doi.org/10.1145/3408877.3432539)
[10.1145/3408877.3432539 Accessed 2023-12-19](https://dl.acm.org/doi/10.1145/3408877.3432539)
[35] West, M., Herman, G.L., Zilles, C.: Prairielearn: Mastery-based online problem
solving with adaptive scoring and recommendations driven by machine learning.
In: 2015 ASEE Annual Conference & Exposition, pp. 26–1238126123814. ASEE
Conferences, Seattle, Washington (2015). https://peer.asee.org/24575
27
-----
[36] Kleij, F.M., Feskens, R.C.W., Eggen, T.J.H.M.: Effects of Feedback in a
Computer-Based Learning Environment on Students’ Learning Outcomes: A
[Meta-Analysis. Review of Educational Research 85(4), 475–511 (2015) https://](https://doi.org/10.3102/0034654314564881)
[doi.org/10.3102/0034654314564881 . Publisher: American Educational Research](https://doi.org/10.3102/0034654314564881)
Association. Accessed 2024-02-26
[37] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., K¨uttler, H.,
Lewis, M., Yih, W.-t., Rockt¨aschel, T., Riedel, S., Kiela, D.: Retrieval-Augmented
Generation for Knowledge-Intensive NLP Tasks. arXiv. arXiv:2005.11401 [cs]
[(2021). https://doi.org/10.48550/arXiv.2005.11401 . http://arxiv.org/abs/2005.](https://doi.org/10.48550/arXiv.2005.11401)
[11401 Accessed 2024-05-28](http://arxiv.org/abs/2005.11401)
[38] Ha, T., Kim, S.: Improving Trust in AI with Mitigating Confirmation Bias: Effects
of Explanation Type and Debiasing Strategy for Decision-Making with Explainable AI. International Journal of Human–Computer Interaction 0(0), 1–12 (2023)
[https://doi.org/10.1080/10447318.2023.2285640 . Publisher: Taylor & Francis](https://doi.org/10.1080/10447318.2023.2285640)
eprint: https://doi.org/10.1080/10447318.2023.2285640. Accessed 2024-05-16
28
-----
| [
"Seth, Poulsen",
"Chenyan, Zhao",
"Mariana, Silva"
] | 2024-06-11T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2406.10268 | https://arxiv.org/abs/2406.10268 | https://www.semanticscholar.org/paper/68db729db61d3b96357b237c7d0da28ebe80c71f |
Automated Completion of Statements and Proofs in Synthetic Geometry: an Approach based on Constraint Solving | Conjecturing and theorem proving are activities at the center of mathematical practice and are difficult to separate. In this paper, we propose a framework for completing incomplete conjectures and incomplete proofs. The framework can turn a conjecture with missing assumptions and with an under-specified goal into a proper theorem. Also, the proposed framework can help in completing a proof sketch into a human-readable and machine-checkable proof. Our approach is focused on synthetic geometry, and uses coherent logic and constraint solving. The proposed approach is uniform for all three kinds of tasks, flexible and, to our knowledge, unique such approach. | null | # Automated Completion of Statements and Proofs in Synthetic Geometry: an Approach based on Constraint Solving
Salwa Tabet Gonzalez
UMR 7357 CNRS
University of Strasbourg
Pˆole API, Bd S´ebastien Brant
BP 10413
67412 Illkirch, France
```
[email protected]
```
Predrag Janiˇci´c
Department for Computer Science
Faculty of Mathematics
University of Belgrade
Studentski trg 16
11000 Belgrade, Serbia
```
[email protected]
```
Julien Narboux
UMR 7357 CNRS
University of Strasbourg
Pˆole API, Bd S´ebastien Brant
BP 10413
67412 Illkirch, France
```
[email protected]
```
Conjecturing and theorem proving are activities at the center of mathematical practice and are difficult to separate. In this paper, we propose a framework for completing incomplete conjectures
and incomplete proofs. The framework can turn a conjecture with missing assumptions and with an
under-specified goal into a proper theorem. Also, the proposed framework can help in completing a
proof sketch into a human-readable and machine-checkable proof. Our approach is focused on synthetic geometry, and uses coherent logic and constraint solving. The proposed approach is uniform
for all three kinds of tasks, flexible and, to our knowledge, unique such approach.
## 1 Introduction
Automated theorem provers take as input the formal statement of a conjecture in a theory described by
axioms and lemmas, and try to generate a proof or a counter-example for this conjecture. In the field
of geometry, several efficient automated theorem proving approaches have been developed, including
algebraic ones such as Wu’s method, Gr¨obner bases method, and semi-synthetic methods such as the
area method. In these approaches, typically, the conjecture and the axioms being considered are fixed.
However, in mathematical practice, in the context of education and also in mathematical research, the
conjecturing and proving activities are not separated but interleaved. The practitioner may try to prove a
statement which is valid only assuming some implicit or unknown assumptions, while the list of lemmas
and theorem which can be used may not be complete. In education, for some kind of exercises, a precise
formulation of the statement to be proved is also left to the student, with questions such as: “What is
the nature of the quadrilateral ABCD?”. Hence, the conjecture can contain unknown assumptions called
_abducts, and the goal may be not completely specified. One may also ask for a proof using a particular_
theorem or an intermediate fact, i.e., a proof partially specified using constraints specifying some proof
steps.
In this paper, we consider the problems of (simultaneously) completing (a) the assumptions of the
conjecture; (b) the goal of the conjecture; (c) a proof sketch for the conjecture. The completion process
should lead to a proof that is both machine-checkable and human-readable. Because we aim at producing intelligible and readable proofs, with a similar level of granularity as paper-and-pencil proofs, our
approach is logic-based, uses a fragment of first-order logic called coherent logic, and is focused on
synthetic geometry (in contrast to algebraic methods). Our approach for dealing with partial conjectures
and partial proofs is implemented as an extension of the automated theorem prover Larus developed
previously [14]. The approach is uniform for all three kinds of completion tasks, flexible and, to our
knowledge, unique such approach.
P. Quaresma and Z. Kov´acs (Ed.): Automated Deduction
in Geometry 2023 (ADG 2023).
[EPTCS 398, 2024, pp. 21–37, doi:10.4204/EPTCS.398.6](http://dx.doi.org/10.4204/EPTCS.398.6)
-----
_H_
_D_ _G_
_C_
_F_
_E_
_G_
_F_
_E_ _H_
_G_
_F_
Figure 1: Illustrations for five problems related to Varignon’s theorem, respectively: Problem 1; Problem
2; Problem 3.
We list five high-school level synthetic geometry problems related to Varignon’s theorem (Figure 1),
that we will try to solve using our approach.
**Problem 1 (Fully specified statement) Consider a quadrilateral ABCD, let E, F, G and H be the mid-**
points of AB, BC, CD, DA respectively. Prove that the quadrilateral EFGH is a parallelogram
(assuming that there are no two sides that are aligned).
**Problem 2 (First inverse problem) Consider a quadrilateral ABCD, let E, F, and G be the midpoints**
of AB, BC and CD respectively. Let H be a point. Under which assumption is the quadrilateral
_EFGH a parallelogram?_
**Problem 3 (Second inverse problem) Consider a quadrilateral ABCD, let E, F, G and H be the mid-**
points of AB, BC, CD, DA respectively. Under which assumption is the quadrilateral EFGH a
rectangle?
**Problem 4 (Partially specified goal) Consider a quadrilateral ABCD, let E, F, G and H be the mid-**
points of AB, BC, CD, DA respectively. What is the nature of the quadrilateral EFGH?
**Problem 5 (Partially specified proof) Consider a quadrilateral ABCD, let E, F, G and H be the mid-**
points of AB, BC, CD, DA respectively. We have that EG = FH. Prove that EFGH is a rectangle
using the axiom “If the diagonals of a parallelogram are congruent, then it’s a rectangle”.
The above examples are inspired by exercises given in a teacher training session. A more detailed
discussion about how these examples can be used in a didactic context, issues related to the formalization
can be found in [11, 19]
## 2 Background
This section provides some necessary background information on a fragment of first-order logic called
coherent logic that our approach uses. There are several automated provers for coherent logic, including
Larus, which is based on “theorem proving as constraint solver” paradigm.
**2.1** **Coherent Logic**
A formula of first-order logic is said to be coherent if it has the following form:
_A0(⃗x)_ _∧_ _..._ _∧_ _An−1(⃗x) ⇒∃⃗y(B0(⃗x,⃗y)_ _∨_ _..._ _∨_ _Bm−1(⃗x,⃗y))_
-----
where universal closure is assumed, and where _⃗x denotes a sequence of variables x0,_ _x1,...,_ _xk_ 1; Ai (for
_−_
0 ≤ _i ≤_ _n_ _−_ 1) denotes an atomic formula (involving zero or more variables from⃗x); _⃗y denotes a sequence_
of variables y0, _y1,...,_ _yl_ 1; B j (for 0 _j_ _m_ 1) denotes a conjunction of atomic formulae (involving
_−_ _≤_ _≤_ _−_
zero or more of the variables from⃗x and⃗y) [14]. If there are no formulae Ai, then the left-hand side of the
implication is assumed to be ⊤. If there are no formulae B j, then the right-hand side of the implication
is assumed to be ⊥. There are no function symbols with arity greater than zero. Coherent formulae do
not involve the negation connective. A coherent theory is a set of sentences, axiomatized by coherent
formulae, and closed under derivability. A number of theories and theorems can be formulated directly
and simply in coherent logic (CL). In addition, any first-order theory can be translated into CL, possibly
with additional predicate symbols [12, 21]. Synthetic geometry can be expressed easily using CL. For
example, the central part of axioms system of Euclid (as formalized by Beeson et al. [3]), or Hilbert
(as formalized by Braun et al. [6]), or Tarski [26] can be expressed in first-order logic without function
symbols, and the axioms are mostly in CL form.
Translation of FOL formulae into CL involves elimination of the negation connectives: negations
can be kept in place and new predicates symbols for corresponding sub-formula have to be introduced,
or negations can be pushed down to atomic formulae [21]. In the latter case, for every predicate symbol
_R (that appears in negated form), a new symbol R is introduced that stands for ¬R, and the following_
axioms are introduced: ∀⃗x(R(⃗x) _∧_ _R(⃗x) ⇒⊥), ∀⃗x(R(⃗x)_ _∨_ _R(⃗x))._
In contrast to resolution-based theorem proving, in forward reasoning for CL, the conjecture being
proved is kept unchanged and proved without using refutation, Skolemization and clausal form. Thanks
to this, CL is suitable for producing human-readable synthetic proofs and also machine verifiable proofs
[4, 12]. The problem of provability in CL is semi-decidable. CL admits a simple proof system, a sequentbased variant is as follows [27]:
Γ, _ax,_ _A0(⃗a),...,_ _An−1(⃗a),_ _B0(⃗a,[⃗]b)_ _∨_ _..._ _∨_ _Bm−1(⃗a,[⃗]b) ⊢_ _P_
MP
Γ, _ax,_ _A0(⃗a),...,_ _An−1(⃗a) ⊢_ _P_
Γ, _B0(⃗c) ⊢_ _P_ _..._ Γ, _Bm−1(⃗c) ⊢_ _P_ QEDcs (case split)
Γ, _B0(⃗c)_ _∨_ _..._ _∨_ _Bm−1(⃗c) ⊢_ _P_
QEDas (assumption)
Γ, _Bi(⃗a,[⃗]b) ⊢∃⃗y(B0(⃗a,⃗y)_ _∨_ _..._ _∨_ _Bm−1(⃗a,⃗y))_
Γ, _⊥⊢_ _P_ [QEDefq][ (][ex falso quodlibet][)]
In the rules given above, it is assumed: ax is a formula A0(⃗x) ∧ _... ∧_ _An−1(⃗x) ⇒∃⃗y(B0(⃗x,⃗y) ∨_ _... ∨_
_Bm−1(⃗x,⃗y));[1]_ _⃗a,_ _[⃗]b, ⃗c denote sequences of constants (possibly of length zero); in the rule MP (extended_
_modus ponens),_ _[⃗]b are fresh constants; ⃗x and ⃗y denote sequences of variables (possibly of length zero);_
_Ai(⃗x) (respectively Bi(⃗x,⃗y)) have no free variables other than from ⃗x (respectively ⃗x and ⃗y); Ai(⃗a) are_
ground atomic formulae; Bi(⃗a,[⃗]b) and Bi(⃗c) are conjunctions of ground atomic formulae; Φ denotes the
list of conjuncts in Φ if Φ is conjunction, and otherwise Φ itself. In the proving process, the rules are
1Notice the hidden link between the formulae Bi(⃗a,[⃗]b) from the rule MP and the formula ax: the formulae Bi(⃗a,[⃗]b) from the
rule are instances of the formulae Bi(⃗x,⃗y) from ax.
-----
read from bottom to top, i.e., by a rule application one gets the contents (new sub-goals) above the line.
For a set of coherent axioms AX and the statement A0(⃗x) ∧ _... ∧_ _An−1(⃗x) ⇒∃⃗y(B0(⃗x,⃗y) ∨_ _... ∨_
_Bm−1(⃗x,⃗y)) to be proved, within the above proof system one has to derive the following sequent (where_ _⃗a_
denotes a sequence of new symbols of constants): AX, _A0(⃗a),...,_ _An−1(⃗a) ⊢∃⃗y(B0(⃗a,⃗y)∨...∨_ _Bm−1(⃗a,⃗y))._
Notice that, in the above proof system, case split may occur only at the end of a (sub)proof. However,
it is not a substantial restriction: any proof with unrestricted use of case split can be transformed to such
form.
**2.2** **Theorem Proving as Constraint Solving and the Larus System**
“Theorem proving as constraint solving” is a paradigm for automated theorem proving recently proposed
[14]. In contrast to common automated theorem proving approaches, in which the search space is a set
of some formulae and what is sought is again a (goal) formula, this new approach is based on searching
for a proof (of a given length) as a whole. Namely, a proof of a formula in a fixed logical setting can be
encoded as a sequence of natural numbers obeying some constraints. A suitable solver can find such a
sequence and from that sequence a sought proof can be reconstructed. This approach is implemented in
C++, within an open-source prover Larus,[2] specialized in proofs in coherent logic and using SAT, SMT,
and CSP solvers for solving sets of constraints. Larus can generate readable, human understandable
proofs in natural language and also machine-verifiable proofs for the interactive provers Coq, Isabelle,
and Mizar.
Each CL proof consists of several proof steps, while each of them has one of the following kinds
(with obvious meaning): ASSUMPTION, MP, FIRSTCASE, SECONDCASE, QEDBYCASES, QED
BYASSUMPTION, QEDBYEFQ. The information relevant for MP steps include: AxiomApplied, From
(the ordinal numbers of proof steps justifying premises of the axiom applied), Instantiation (of the
variables in the axiom), Contents (the atoms in formula in the proof step), etc. Nesting denotes the
nesting of the proof step (the nesting of the first step is 1).
The proof can be represented by a sequence of numbers, meeting some constraints (that correspond
to definitions of inference steps given in Section 2.1). For instance, if the proof step s is of the kind
QEDBYEFQ, then the following conditions must hold (given almost in verbatim as in our C++ code):[3]
1. StepKind (s) = QEDBYEFQ;
2. s > 0;
3. Contents (s _−_ 1)(0) = ⊥;
4. Goal (s);
5. Nesting (s) = Nesting (s _−_ 1).
The above conditions can be understood in the following way: if there is a proof of the given conjecture,
the proof step s in that proof is of the kind QEDBYEFQ iff the natural number StepKind (s) equals the
code for QEDBYEFQ, s > 0 (since there must be a previous step), the contents of the previous proof
step is ⊥, the contents of the step is the goal itself, and the nesting of the steps s _−_ 1 and s is the same.
Each proof step has one of the listed kinds and meet corresponding conditions. There are also some
additional, global constraints, like that the last proof step has Nesting equal 1.
[2https://github.com/janicicpredrag/Larus](https://github.com/janicicpredrag/Larus)
3The corresponding C++ implementation is an improved version of the implementation presented earlier [14].
-----
Larus works in the following way. If there is a set of axioms, a conjecture, and a proof length, a
corresponding proof can be represented as a sequence of natural numbers, still unknown, so they will be
represented by variables V . The constraints that have to be met for each proof step and for the proof as
a whole can be expressed in terms of these variables V . If a solver can find a model for the constraint,
from it the proof in logical terms can be reconstructed. All constraints involved are linear constraints
over natural numbers. Since linear arithmetic is decidable, decision procedures for it can decide, for
each input constrains, whether or not it has a model. For this purpose, Larus can use SAT, SMT, and CSP
solvers. For input, Larus uses axioms and conjectures stored in a file in TPTP/fof format.
## 3 Abducts and Completing Assumptions
There are three major types of logical inference: induction, deduction, and abduction. The concept of
abduction has been introduced by Peirce [20]. In deduction, everything inferred is necessarily true, while
it is not the case with the remaining two types of inference. Induction tries to infer general rules based
on individual instances. The aim of abduction is to produce additional hypotheses to explain observed
facts. Abduction has a wide spectrum of implicit or explicit applications – in everyday life, in education,
and in scientific reasoning, including in building mathematical theories, or in software verification. One
definition of abduct is given below.
**Definition 1 Given a theory T and a formula G (the goal to be proved), such that T ̸|= G, an explanations**
_or abduct is a formula A meeting conditions: T,_ _A |= G and T,_ _A ̸|= ⊥._
It is clear that some abducts are not interesting, so there are often some additional restrictions given.
There is no general agreement about such restrictions, but two types are most usual: syntactical restric_tions (abducts should be of a specific syntactical form) and minimality restrictions (for any other abduct_
_A[′], if T,_ _A |= A[′]_ then A ≡ _A[′]). It is reasonable to ask that A is not G, as it is trivial. Some authors also add_
stronger a restriction that A ̸|= G (i.e., at least one axiom of T has to be used to prove G).
**Approaches for Computing Abducts.** Various algorithms to produce different kind of abducts have
been developed [1]. In abductive logic programming, techniques for abductive reasoning are developed
in the context of logic programming. Rules are considered to be Horn clauses [8]. According to Russo
et al. [25], some systems assume that predicate symbols appearing in abducts do not appear in the conclusion of any rule and that negation does not appear in the conclusion of any rule. This restriction is
not realistic in the context of geometry. In our example, we want to accept geometric predicate symbols
both in abducts, and in the assumptions and conclusion of theorems. Some approaches are based on
Robinson’s resolution algorithm, extended such that when no more clauses can be produced, the atomic
clauses are considered as a potential abduct and consistency if checked [17]. There are also approaches
developed for the context of SMT solving, dealing with decidable theories like linear arithmetic [10, 23]
In the context of geometry, some algebraic algorithms can generate additional assumptions for the
statement to be true. For example, Wu’s method [28] can produce non-degeneracy conditions. Algebraic
methods can also be used to generate more general abducts [22]. These methods are more efficient than
ours, but more specific so cannot be used for arbitrary geometric theories. Also, they cannot generate
readable proofs. Moreover, expressing algebraic non-degeneracy conditions in simple geometrical terms
is not easy and not always possible [7].
-----
**Abduction in Synthetic Euclidean Geometry.** In this paper, the theory T from Definition 1 is a synthetic Euclidean geometry. In this context, automated finding of proofs allowing abducts may have
several applications. For instance, an automated system may help a student or a researcher who tries
to prove (or formalize) a theorem with a missing assumption. Barbosa et al. have proposed such goal
(although not for geometry) in the context of interactive proof assistants where conjectures are sent to an
SMT solver [2].
Non-degeneracy conditions are often overlooked and missing in informal geometry statements. Abductive reasoning is also a task which can be asked explicitly to students. The answer expected by the
teacher for Problem 2 is that H should be the midpoint of AD.
**Finding Abducts using Larus.** In this paper, we restrict consideration of abduction only to coherent
logic and only to abducts that are conjunctions of ground atomic formulae. Larus was not implemented
with abduction in mind, yet implementation of support for abduction turned out to be very simple, almost
trivial, and took less than 100 lines of C++ code. In order to find abducts using Larus, we treat them as a
special case of proof steps, in the main proof branch, just after assumptions. We have to add constraints
on what such an abduct can be:
1. the abduct is treated as an assumption;
2. the nesting of the abduct equals 1;
3. the abduct is an atomic formula (no branching);
4. the predicate symbol is one of the predicate symbols in the signature;
5. the arguments are among existing symbols of constants;
6. the abduct is not the goal itself;
7. the abduct is not ⊥.
The given conditions may be written in the following way, assuming that the abduct is placed in i-th
step of the proof:
1. StepKind (i) =ASSUMPTION
2. Nesting (i) = 1
3. Cases (i) = false
4. ContentsPredicate (i, 0) < sizeof (Signature)
5. for each argument j (up to maximal arity): ContentsArgument (i, 0, j) < sizeof (Constants)
6. Goal (i) = false
7. ContentsPredicate (i, 0) ̸= ⊥
One can also choose a number of abducts, each leading to the constraints given above. With such
additional constraints for each abduct (for additional proof steps in specific positions in a proof sought),
with a given set of axioms and a conjecture, and with a concrete proof length, we run Larus as usual.
The solving/proving process is the same as without abducts: the constraint solver finds a way to specify
a full proof, including the abducts, i.e., under-specified assumptions.
In the above list of conditions, the last two do not follow the basic definition of abduct. Like in some
other variants of the definition, the abduct may not be equal to the goal atom because such abducts are
trivial. Also, the abduct may not be equal to ⊥, since it is inconsistent. It is important to discard other
-----
inconsistent abducts early, so we add one more restriction: the proof of T, _A |= G should not end with_
QEDBYEFQ. Some constructed abducts may still be inconsistent with other assumptions, and we use
an external, more efficient automatic theorem prover, Vampire [16], to discard such abducts.
**Example 1 For the first inverse problem (Problem 2 from Section 1), Larus produces two consistent,**
_symmetric abducts (the proof obtained with the first abduct is presented in Appendix 7.2):_
- “H is the midpoint of AD”
- “H is the midpoint of DA”
**Example 2 For the second inverse problem (Problem 3 from Section 1), Larus produces more than 150**
_consistent abducts, most of which give degenerate cases, hence are less interesting. Apart from such_
_abducts, we obtained the following abducts and their symmetric variants (the proof with the first abduct_
_is presented in Appendix 7.3):_
- “the diagonals HF and EG are congruent”
- “[̸] _FGH is a right angle”_
- “[̸] _EHG is a right angle”_
- “[̸] _HEF is a right angle”_
- “[̸] _EFG is a right angle”_
## 4 Deducts and Completing Goals
Non-trivial first-order logic theories have infinite number of theorems. Approaches based on refutation
cannot be used with under-specified goals and, hence, cannot be used for completing them. In principle,
a controlled forward-reasoning (for instance, based on some kind of breadth-first search) can enumerate
all theorems of a theory. However, such a systematic approach can be hardly useful for some practical
applications, like looking for possible conjectures of a specific form. Our framework allows (but does not
require) specifying partially the form of the goal: for instance, one may specify the dominant predicate
symbol in the goal atom, or some of its arguments.
**Finding Deducts in Synthetic Euclidean Geometry.** In the field of geometry, deduct candidates can
also be guessed based on an illustration, giving a concrete model. However, these deduct candidates still
have to be verified i.e., proved. Potential deducts could also be listed as large disjunctions of atomic
formulae, but this method does not scale when the list of potential deducts is too long.
**Finding Deducts using Larus.** In Larus, if the goal is given i.e., fully specified, corresponding constraints are added to the full constraint representing a proof sought. Let us assume that the final step
of the proof is (some fixed) n and, for simplicity, let us assume that the goal is just a single atom. The
corresponding constraint then includes:
1. Nesting (n) = 1
2. Cases (n) = false
3. ContentsPredicate (n, 0) = the goal predicate symbol
-----
4. for each argument j (except for existentially quantified variables):
`ContentsArgument (n,` 0, j) = the argument from the given goal instantiated.
If the goal is under-specified, for instance, if the predicate symbol is not given (it is given as _ in the
TPTP file), the third condition is just ignored. The same holds for the arguments.[4] During the solving
process, if there is a model, these slots are filled-in by some concrete values, giving a concrete goal.
Overall, support for finding under-specified deducts is very simple. The current implementation finds
one possible deduct, but it can be extended to list all possible deducts, similarly as for abducts (as
explained in Section 3).
**Example 3 For Problem 4 from Section 1, Larus produces the deduct “EF ∥** _GH”. The proof obtained_
_with such deduct is presented in Appendix 7.4._
## 5 Hints and Completing Proofs
Informal proofs, for instance from textbooks, are often partial and incomplete. They may even provide
only a part of a full proof, or some instructions like for filling gaps by analogy. Reconstructing proofs
using such hints is very important task, as discussed by Gowers and Hales [13]: “One dream was to
develop an automated assistant that would function at the level of a helpful graduate student. The senior
mathematician would suggest the main lines of the proof, and the automated grad student would fill in
the details.”
**Completing Proofs in Synthetic Euclidean Geometry.** In the context of geometry, completing proofs
could be interesting either as a way to render the formalization process simpler (automation would bring
in all the details that are overlooked in pen and paper proofs), or as a tool working behind the scene
for providing guidance for what could be the next step in the proof. This objective has been studied
by Richard et al. [24]. In geometry, hints can also be based on some observations from an associated
illustration.
**Completing Proofs using Larus.** Larus can be instructed to look for a proof of a given conjecture
(also possibly only partially specified) meeting some conditions (that we call “hints”) [14]. Therefore,
Larus can try, for instance, to reconstruct a proof given only in outline (like proofs in textbooks). Larus
use hints in a much more general way than just splitting the problem into sub-problems: for instance,
some hint may be used in just one proof branch and cannot be proved itself. Hints do not have to be
ordered (one can ask for a proof using X and Y in no particular order), they can be vague, imposed only
by partial constraints (“find a proof that uses this particular predicate symbol”, or “find a proof using
some specific axiom”, without the way it is instantiated, etc.).
Completing proofs in Larus is supported similarly as for abducts and under-specified goals – by
modifying the corresponding constraints. The main difference is that abducts and incomplete goals
are under-specified, so some constraints have to be omitted, while partial proof introduce additional
constraints, on top of the common constraints that must be met by all proof steps. For expressing hints,
we slightly extended the language TPTP/fof to allow a simple but still quite expressible semantics. Some
kinds of hints (not all) are illustrated below.
4Actually, underspecified arguments can be also handled using existential quantification.
-----
```
fof(hintname0, hint, r(_,_), _, _).
fof(hintname1, hint, r(_,_), 5, _).
fof(hintname2, hint, _, 5, ax2(_,_)).
```
The first hint specifies that some proof step will have an atom of the form r(...,...). The second hint
specifies that the 5th proof step will have atom of the form r(...,...). The third hint specifies that in the
5th proof step the axiom ax2 is applied.
**Example 4 For Problem 5 from Section 1, Larus was able to find a proof around 20% faster with a**
_suitable hint presented in Appendix 7.5._
## 6 Conclusion and Future Work
In this paper we have shown how a prover using the “theorem proving as constraint solving” paradigm
can be extended such that it can complete partially specified conjectures and partially specified proofs.
This extension is simple, and the implementation update is very small. The completion algorithm is uniform, since all three completion tasks (completing assumptions, completing goals, completing proofs) are
handled in the same spirit – in terms of adding or deleting some constraints. To our knowledge, this approach is new, and we are not aware of other systems that can address all three sorts of completion tasks.
The presented approach is flexible as different variations of completion tasks can be supported. The
strength of this approach is also that it can generate both proofs that are human-readable and machinecheckable. The proposed framework has two main limitations. First, in current stage, it can deal only
with coherent logic, hence the theories cannot involve function symbols, which excludes geometry proof
that use (non-trivial) arithmetic. Second, the framework cannot deal with conjectures whose proofs are
long (say, longer than 50 proof steps).
To our knowledge, there is only one other approach in which some kind of proof is encoded, and
reconstructed from a model for the corresponding set of constraints – the approach in which rigid connection tableaux are encoded as SAT and SMT instances [9, 5, 18]. However, in this line of research,
neither machine verifiable or readable proofs, nor any of completion tasks are considered.
The presented work can be extended in several directions. One of our goals is to use our framework to
help transfer geometry knowledge from informal sources to proof assistants and between proof assistants,
while keeping its high-level structure. In informal sources, statements of theorems may be incomplete,
while proofs may be given just in outline. Still, using our approach such contents can be, at least in some
cases, completed and turned into a verifiable form. For transferring knowledge from a proof assistant,
one would need to go into its specifics, but only to grab (some) proof steps and make hints out of them.
We are still to explore these ideas on a larger scale, like one geometry textbook. In the same spirit as
the work proposed by Jiang et al. [15], our approach could be combined with large language models
to perform automatic formalization by extracting data from natural language proofs. More specific to
abduction, we are planning to make an in-depth comparison (both qualitative and quantitative) of our
tool to other tools for generating abducts.
**Acknowledgement.** The work related to this paper has been partially supported by the European Cost
project CA20111 EUROProofNet. The second author has been partially supported by the Ministry of
Science of Serbia contract 451-03-47/2023-01/200104.
-----
## References
[1] Aliseda Atocha (2006): ABDUCTIVE REASONING. Synthese Library 330, Kluwer Academic Publishers,
[Dordrecht, doi:10.1007/1-4020-3907-7.](http://dx.doi.org/10.1007/1-4020-3907-7)
[2] Haniel Barbosa, Chantal Keller, Andrew Reynolds, Arjun Viswanathan, Cesare Tinelli & Clark Barrett
(2023): An Interactive SMT Tactic in Coq using Abductive Reasoning. In: EPiC Series in Computing,
[94, EasyChair, pp. 11–22, doi:10.29007/432m. ISSN: 2398-7340.](http://dx.doi.org/10.29007/432m)
[3] Michael Beeson, Julien Narboux & Freek Wiedijk (2019): Proof-checking Euclid. Annals of Mathematics
_[and Artificial Intelligence 85(2-4), pp. 213–257, doi:10.1007/s10472-018-9606-x.](http://dx.doi.org/10.1007/s10472-018-9606-x)_
[4] Marc Bezem & Thierry Coquand (2005): Automating Coherent Logic. In Geoff Sutcliffe & Andrei Voronkov,
editors: 12th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning —
_[LPAR 2005, Lecture Notes in Computer Science 3835, Springer, pp. 246–260, doi:10.1007/11591191 18.](http://dx.doi.org/10.1007/11591191_18)_
[5] Jeremy Bongio, Cyrus Katrak, Hai Lin, Christopher Lynch & Ralph Eric McGregor (2008): _En-_
_coding First Order Proofs in SMT._ _Electron. Notes Theor. Comput. Sci. 198(2),_ pp. 71–84,
[doi:10.1016/j.entcs.2008.04.081.](http://dx.doi.org/10.1016/j.entcs.2008.04.081)
[6] Gabriel Braun & Julien Narboux (2012): From Tarski to Hilbert. In Tetsuo Ida & Jacques Fleuriot, editors: Post-proceedings of Automated Deduction in Geometry 2012, LNCS 7993, Springer, pp. 89–109,
[doi:10.1007/978-3-642-40672-0 7.](http://dx.doi.org/10.1007/978-3-642-40672-0_7)
[7] XueFeng Chen & DingKang Wang (2004): The Projection of Quasi Variety and Its Application on Geo_metric Theorem Proving and Formula Deduction. In Automated Deduction in Geometry, 4th International_
_[Workshop, ADG 2002, Lecture Notes in Computer Science 2930, Springer, pp. 21–30, doi:10.1007/978-3-](http://dx.doi.org/10.1007/978-3-540-24616-9_2)_
[540-24616-9 2.](http://dx.doi.org/10.1007/978-3-540-24616-9_2)
[8] Marc Denecker & Antonis C. Kakas (2002): Abduction in Logic Programming. In Computational Logic:
_Logic Programming and Beyond, Essays in Honour of Robert A. Kowalski, Part I, Lecture Notes in Computer_
_[Science 2407, Springer, pp. 402–436, doi:10.1007/3-540-45628-7 16.](http://dx.doi.org/10.1007/3-540-45628-7_16)_
[9] Todd Deshane, Wenjin Hu, Patty Jablonski, Hai Lin, Christopher Lynch & Ralph Eric McGregor (2007):
_Encoding First Order Proofs in SAT. In Automated Deduction - CADE-21, 21st International Conference on_
_[Automated Deduction, Lecture Notes in Computer Science 4603, Springer, pp. 476–491, doi:10.1007/978-](http://dx.doi.org/10.1007/978-3-540-73595-3_35)_
[3-540-73595-3 35.](http://dx.doi.org/10.1007/978-3-540-73595-3_35)
[10] Isil Dillig & Thomas Dillig (2013): Explain: A Tool for Performing Abductive Inference. In Computer Aided
_[Verification, Lecture Notes in Computer Science, Springer, pp. 684–689, doi:10.1007/978-3-642-39799-8 -](http://dx.doi.org/10.1007/978-3-642-39799-8_46)_
[46.](http://dx.doi.org/10.1007/978-3-642-39799-8_46)
[11] Viviane Durand-Guerrier, Paolo Boero, Nadia Douek, Susanna S. Epp & Denis Tanguay (2012): Examining
_the Role of Logic in Teaching Proof. In Proof and Proving in Mathematics Education, New ICMI Study_
_[Series 15, Springer, pp. 369–389, doi:10.1007/978-94-007-2129-6 16.](http://dx.doi.org/10.1007/978-94-007-2129-6_16)_
[12] Roy Dyckhoff & Sara Negri (2015): Geometrization of first-order logic. The Bulletin of Symbolic Logic 21,
[pp. 123–163, doi:10.1017/bsl.2015.7.](http://dx.doi.org/10.1017/bsl.2015.7)
[[13] Thomas Hales (2019): An argument for controlled natural languages in mathematics. Available at https:](https://jiggerwit.wordpress.com/2019/06/20/an-argument-for-controlled-natural-languages-in-mathematics/)
```
//jiggerwit.wordpress.com/2019/06/20/an-argument-for-controlled-natural-languages
-in-mathematics/.
```
[14] Predrag Janiˇci´c & Julien Narboux (2022): Theorem Proving as Constraint Solving with Coherent Logic.
_[Journal of Automated Reasoning 66(4), pp. 689–746, doi:10.1007/s10817-022-09629-z.](http://dx.doi.org/10.1007/s10817-022-09629-z)_
[15] Albert Q. Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik, Timoth´ee Lacroix,
Yuhuai Wu & Guillaume Lample (2023): Draft, Sketch, and Prove: Guiding Formal Theorem Provers with
_[Informal Proofs, doi:10.48550/arXiv.2210.12283. ArXiv:2210.12283 [cs].](http://dx.doi.org/10.48550/arXiv.2210.12283)_
[16] Laura Kov´acs & Andrei Voronkov (2013): First-Order Theorem Proving and Vampire. In Computer Aided
_Verification - 25th International Conference, CAV 2013, Lecture Notes in Computer Science 8044, Springer,_
[pp. 1–35, doi:10.1007/978-3-642-39799-8 1.](http://dx.doi.org/10.1007/978-3-642-39799-8_1)
-----
[17] P. Marquis (1991): Extending abduction from propositional to first-order logic. In Fundamentals of Artificial
_[Intelligence Research, Springer, doi:10.1007/3-540-54507-7 12.](http://dx.doi.org/10.1007/3-540-54507-7_12)_
[18] Ralph Eric McGregor (2011): Automated Theorem Proving Using SAT. PhD Thesis, Clarkson University.
[Available at https://search.proquest.com/openview/b87467cab0987f591010cf19dc554fa3/1?](https://search.proquest.com/openview/b87467cab0987f591010cf19dc554fa3/1?pq-origsite=gscholar&cbl=18750&diss=y)
```
pq-origsite=gscholar&cbl=18750&diss=y.
```
[19] Julien Narboux & Viviane Durand-Guerrier (2022): Combining pencil/paper proofs and formal proofs, a
_challenge for Artificial Intelligence and mathematics education. In: Mathematics Education in the Age of_
_[Artificial Intelligence, Mathematics Education in the Digital Era 17, Springer, doi:10.1007/978-3-030-86909-](http://dx.doi.org/10.1007/978-3-030-86909-0_8)_
[0 8.](http://dx.doi.org/10.1007/978-3-030-86909-0_8)
[20] Charles Peirce (1932): Collected papers of Charles Sanders Peirce. Belknap Press.
[21] Andrew Polonsky (2011): Proofs, Types and Lambda Calculus. Ph.D. thesis, University of Bergen.
[22] T. Recio & M. P. V´elez (1999): Automatic Discovery of Theorems in Elementary Geometry. J. Autom.
_[Reason. 23(1), pp. 63–82, doi:10.1023/A:1006135322108.](http://dx.doi.org/10.1023/A:1006135322108)_
[23] Andrew Reynolds, Haniel Barbosa, Daniel Larraz & Cesare Tinelli (2020): Scalable Algorithms for Ab_duction via Enumerative Syntax-Guided Synthesis._ In Automated Reasoning - 10th International Joint
_Conference, IJCAR 2020, Part I, Lecture Notes in Computer Science 12166, Springer, pp. 141–160,_
[doi:10.1007/978-3-030-51074-9 9.](http://dx.doi.org/10.1007/978-3-030-51074-9_9)
[24] Philippe R. Richard, Josep Maria Fortuny, Michel Gagnon, Nicolas Leduc, Eloi Puertas & Mich`ele TessierBaillargeon (2011): Didactic and theoretical-based perspectives in the experimental development of an in_[telligent tutorial system for the learning of geometry. ZDM 43(3), pp. 425–439, doi:10.1007/s11858-011-](http://dx.doi.org/10.1007/s11858-011-0320-y)_
[0320-y.](http://dx.doi.org/10.1007/s11858-011-0320-y)
[25] Alessandra Russo & Bashar Nuseibeh (2001): On The Use Of Logical Abduction In Software Engineering.
[In Handbook of Software Engineering and Knowledge Engineering, doi:10.1142/9789812389718 0037.](http://dx.doi.org/10.1142/9789812389718_0037)
[26] Wolfram Schwabh¨auser, Wanda Szmielew & Alfred Tarski (1983): Metamathematische Methoden in der
_[Geometrie. Springer. doi:10.1007/978-3-642-69418-9.](http://dx.doi.org/10.1007/978-3-642-69418-9)_
[27] Sana Stojanovi´c, Julien Narboux, Marc Bezem & Predrag Janiˇci´c (2014): A Vernacular for Coherent Logic.
In Intelligent Computer Mathematics, Lecture Notes in Computer Science 8543, Springer, pp. 388–403,
[doi:10.1007/978-3-319-08434-3 28.](http://dx.doi.org/10.1007/978-3-319-08434-3_28)
[28] Wen-Tsun Wu (1978): On the Decision Problem and the Mechanization of Theorem-Proving in Elementary
_Geometry. 21, Scientia Sinica, pp. 157–179._
## 7 Appendix
In this appendix, we provide a complete list of lemmas and axioms (in coherent logic form) used in
our examples, and the results obtained using Larus. The results were obtained on a PC computer with
Intel(R) Core(TM) i7-8565U CPU @ 1.80GHz processor running under Linux (the time spent should
give just a general picture of the efficiency of the system).
**7.1** **Problem 1: Varignon’s Theorem**
The TPTP file used for Problem 1 is the following:
```
fof(triangle_mid_par_strict, axiom, (! [A, B, C, P, Q] : ( ((~ col(A,B,C)) & midpoint(B,P
,C) & midpoint(A,Q,C)) => par(A,B,Q,P)))).
fof(lemma_par_trans, axiom, (! [A, B, C, D, E, F] : ((par(A,B,C,D) & par(C,D,E,F) & (~col
(A,B,E))) => par(A,B,E,F)))).
```
-----
```
fof(defparallelogram2,axiom, (! [A,B,C,D] : ((par(A,B,C,D) & par(A,D,B,C)) => ((pG(A,B,C,
D)))))).
fof(lemma_parallelNC,axiom, (! [A,B,C,D] : ((par(A,B,C,D)) => ((~ (col(A,B,C)) & ~ (col(A
,C,D)) & ~ (col(B,C,D)) & ~ (col(A,B,D))))))).
fof(lemma_parallelflip,axiom, (! [A,B,C,D] : ((par(A,B,C,D)) => ((par(B,A,C,D) & par(A,B,
D,C) & par(B,A,D,C)))))).
fof(lemma_parallelsymmetric,axiom, (! [A,B,C,D] : ((par(A,B,C,D)) => ((par(C,D,A,B)))))).
fof(midpoint_sym, axiom, (! [A, B, I] : (midpoint(A,I,B) => midpoint(B,I,A)))).
fof(lemma_tP_trans, axiom, (! [A, B, C, D, E, F] : ((tP(A,B,C,D) & tP(C,D,E,F)) => tP(A,
B,E,F)))).
fof(th_varignon,conjecture,(! [A,B,C,D,E,F,G,H] : (( (~(col(B,D,A))) & (~(col(B,D,C))) &
(~(col(A,C,B))) & (~(col(A,C,D))) & (~ (col(E,F,G))) & midpoint(A,E,B) & midpoint(B,F
,C) & midpoint(C,G,D) & midpoint(A,H,D)) => pG(E,F,G,H) ))).
```
If Larus is invoked as: ./larus -l100 -m8 (-l100 means the time limit is 100s, -m8 means that
we look for a proof with 8 or fewer steps), it produces the following proof in 2s:
Consider arbitrary a, b, c, d, e, f , g, h such that:
- ¬col(b, _d,_ _a),_
- ¬col(b, _d,_ _c),_
- ¬col(a, _c,_ _b),_
- ¬col(a, _c,_ _d),_
- ¬col(e, f _,_ _g),_
- b ̸= d,
- a ̸= c,
- midpoint(a, _e,_ _b),_
- midpoint(b, f _,_ _c),_
- midpoint(c, _g,_ _d),_
- midpoint(a, _h,_ _d)._
It should be proved that pG(e, f _,_ _g,_ _h)._
1. par(a, _c,_ _h,_ _g) (by MP, from_ _col(a,_ _c,_ _d), midpoint(c,_ _g,_ _d), midpoint(a,_ _h,_ _d) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _a, B_ _c, C_ _d, P_ _g, Q_ _h)_
_7→_ _7→_ _7→_ _7→_ _7→_
2. par(b, _d, f_ _,_ _g) (by MP, from_ _col(b,_ _d,_ _c), midpoint(c,_ _g,_ _d), midpoint(b, f_ _,_ _c) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _b, B_ _d, C_ _c, P_ _g, Q_ _f_ )
_7→_ _7→_ _7→_ _7→_ _7→_
3. par(a, _c,_ _e, f_ ) (by MP, from _col(a,_ _c,_ _b), midpoint(b, f_ _,_ _c), midpoint(a,_ _e,_ _b) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _a, B_ _c, C_ _b, P_ _f_ , Q _e)_
_7→_ _7→_ _7→_ _7→_ _7→_
4. par(b, _d,_ _e,_ _h) (by MP, from_ _col(b,_ _d,_ _a), midpoint(a,_ _h,_ _d), midpoint(a,_ _e,_ _b) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _b, B_ _d, C_ _a, P_ _h, Q_ _e)_
_7→_ _7→_ _7→_ _7→_ _7→_
5. par(e, f _,_ _g,_ _h) (by MP, from par(a,_ _c,_ _e, f_ ), par(a, _c,_ _h,_ _g),_ _col(e, f_ _,_ _g) using axiom lemma par trans; instantiation: A_ _e, B_
_¬_ _7→_ _7→_
_f_ , C _a, D_ _c, E_ _g, F_ _h)_
_7→_ _7→_ _7→_ _7→_
-----
6. par( _f_ _,_ _g,_ _h,_ _e) (by MP, from par(b,_ _d, f_ _,_ _g), par(b,_ _d,_ _e,_ _h), par(e, f_ _,_ _g,_ _h) using axiom lemma par trans; instantiation: A_ _f_, B
_7→_
_g, C_ _d, D_ _b, E_ _h, F_ _e)_
_7→_ _7→_ _7→_ _7→_ _7→_
7. pG(e, f _,_ _g,_ _h) (by MP, from par(e, f_ _,_ _g,_ _h), par( f_ _,_ _g,_ _h,_ _e) using axiom defparallelogram2; instantiation: A_ _e, B_ _f_, C _g,_
_7→_ _7→_ _7→_
_D_ _h)_
_7→_
8. Proved by assumption! (by QEDas)
**7.2** **Problem 2: First Inverse Problem**
The list of axioms used for the first inverse problem (Problem 2) is the same as in Section 7.1. Only the
conjecture is different – the assumption midpoint(A,H,D) is ommitted:
```
fof(th_varignon,conjecture,(! [A,B,C,D,E,F,G,H] : (( (~(col(B,D,A))) & (~(col(B,D,C))) &
(~(col(A,C,B))) & (~(col(A,C,D))) & (~ (col(E,F,G))) & (B != D) & (A != C) & midpoint
(A,E,B) & midpoint(B,F,C) & midpoint(C,G,D)) => pG(E,F,G,H) ))).
```
If Larus is invoked as: ./larus -l100 -m8 -b1 (-l100 means the time limit is 100s, -m8 means
that we look for a proof with 8 or fewer steps, -b1 means that we look for one atomic formula as an
abduct), it finds a first consistent abduct (after two inconsistent ones) and produces the following humanreadable proof in 3.26 seconds (the abduct found is highlighted):
Consider arbitrary a, b, c, d, e, f , g, h such that:
- ¬col(b, _d,_ _a),_
- ¬col(b, _d,_ _c),_
- ¬col(a, _c,_ _b),_
- ¬col(a, _c,_ _d),_
- ¬col(e, f _,_ _g),_
- b ̸= d,
- a ̸= c,
- midpoint(a, _e,_ _b),_
- midpoint(b, f _,_ _c),_
- midpoint(c, _g,_ _d)._
It should be proved that pG(e, f _,_ _g,_ _h)._
Abducts found:
- midpoint(d, _h,_ _a)_
1. par(a, _c,_ _e, f_ ) (by MP, from _col(a,_ _c,_ _b), midpoint(b, f_ _,_ _c), midpoint(a,_ _e,_ _b) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _a, B_ _c, C_ _b, P_ _f_ , Q _e)_
_7→_ _7→_ _7→_ _7→_ _7→_
2. par(b, _d, f_ _,_ _g) (by MP, from_ _col(b,_ _d,_ _c), midpoint(c,_ _g,_ _d), midpoint(b, f_ _,_ _c) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _b, B_ _d, C_ _c, P_ _g, Q_ _f_ )
_7→_ _7→_ _7→_ _7→_ _7→_
-----
3. par(b, _d,_ _e,_ _h) (by MP, from_ _col(b,_ _d,_ _a), midpoint(d,_ _h,_ _a), midpoint(a,_ _e,_ _b) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _b, B_ _d, C_ _a, P_ _h, Q_ _e)_
_7→_ _7→_ _7→_ _7→_ _7→_
4. par(a, _c,_ _h,_ _g) (by MP, from_ _col(a,_ _c,_ _d), midpoint(c,_ _g,_ _d), midpoint(d,_ _h,_ _a) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _a, B_ _c, C_ _d, P_ _g, Q_ _h)_
_7→_ _7→_ _7→_ _7→_ _7→_
5. par(e, f _,_ _g,_ _h) (by MP, from par(a,_ _c,_ _e, f_ ), par(a, _c,_ _h,_ _g),_ _col(e, f_ _,_ _g) using axiom lemma par trans; instantiation: A_ _e, B_
_¬_ _7→_ _7→_
_f_, C _a, D_ _c, E_ _g, F_ _h)_
_7→_ _7→_ _7→_ _7→_
6. par(e, _h,_ _g, f_ ) (by MP, from par(b, _d,_ _e,_ _h), par(b,_ _d, f_ _,_ _g), par(e, f_ _,_ _g,_ _h) using axiom lemma par trans; instantiation: A_ _e, B_
_7→_
_h, C_ _b, D_ _d, E_ _g, F_ _f_ )
_7→_ _7→_ _7→_ _7→_ _7→_
7. pG(e, f _,_ _g,_ _h) (by MP, from par(e, f_ _,_ _g,_ _h), par(e,_ _h,_ _g, f_ ) using axiom defparallelogram2; instantiation: A _e, B_ _f_, C _g,_
_7→_ _7→_ _7→_
_D_ _h)_
_7→_
8. Proved by assumption! (by QEDas)
**7.3** **Problem 3: Second Inverse Problem**
The list of axioms used for the second inverse problem (Problem 3) is the same as in section 7.1, extended
with the following axioms.
```
fof(defmidpoint,axiom, (! [A,B,C] : ((midpoint(A,B,C)) => ((betS(A,B,C) & cong(A,B,B,C)))
))).
fof(defmidpoint2,axiom, (! [A,B,C] : ((betS(A,B,C) & cong(A,B,B,C)) => ((midpoint(A,B,C))
)))).
fof(midpoint_NC, axiom, (! [A, B, I] : ((midpoint(A,I,B) & (A != B)) => ( (A != I) & ( B
!= I))))).
fof(defrectangle,axiom, (! [A,B,C,D] : ((rectangle(A,B,C,D)) => ((pG(A,B,C,D) & per(A,B,C
) & per(B,C,D) & per(C,D,A) & per(D,A,B)))))).
fof(defrectangle2a,axiom, (! [A,B,C,D] : ((pG(A,B,C,D) & per(A,B,C)) => rectangle(A,B,C,D
)))).
fof(defrectangle2b,axiom, (! [A,B,C,D] : ((pG(A,B,C,D) & per(B,C,D)) => rectangle(A,B,C,D
)))).
fof(defrectangle2c,axiom, (! [A,B,C,D] : ((pG(A,B,C,D) & per(C,D,A)) => rectangle(A,B,C,D
)))).
fof(defrectangle2d,axiom, (! [A,B,C,D] : ((pG(A,B,C,D) & per(D,A,B)) => rectangle(A,B,C,D
)))).
fof(defrectangle2e,axiom, (! [A,B,C,D] : ((per(A,B,C) & per(B,C,D) & per(C,D,A) & per(D,A
,B)) => rectangle(A,B,C,D)))).
%fof(defrectangle3a,axiom, (! [A,B,C,D] : (? [X] : ((rectangle(A,B,C,D)) => cong(A,C,B,D)
& midpoint(A,X,C) & midpoint(B,X,D))))).
fof(defrectangle3b,axiom, (! [A,B,C,D,X] : ((cong(A,C,B,D) & midpoint(A,X,C) & midpoint(B
,X,D)) => rectangle(A,B,C,D)))).
fof(defrectangle4a,axiom, (! [A,B,C,D] : ((rectangle(A,B,C,D)) => (pG(A,B,C,D) & cong(A,C
,B,D))))).
fof(defrectangle4b,axiom, (! [A,B,C,D] : ((pG(A,B,C,D) & cong(A,C,B,D)) => rectangle(A,B,
C,D)))).
fof(lemma_8_2,axiom, (! [A,B,C] : ((per(A,B,C)) => ((per(C,B,A)))))).
fof(varignon_th,axiom,(! [A,B,C,D,E,F,G,H] : (( (~(col(B,D,A))) & (~(col(B,D,C))) & (~(
col(A,C,B))) & (~(col(A,C,D))) & (~ (col(G,F,E))) & (B != D) & (A != C) & midpoint(A,
E,B) & midpoint(B,F,C) & midpoint(C,G,D) & midpoint(A,H,D)) => pG(E,F,G,H) ))).
```
-----
The conjecture is also different – the goal is to find under which assumption the quadrilateral EFGH
is a rectangle.
```
fof(th_varignon_rect,conjecture,(! [A,B,C,D,E,F,G,H] : (( (~(col(B,D,A))) & (~(col(B,D,C)
)) & (~(col(A,C,B))) & (~(col(A,C,D))) & (~ (col(G,F,E))) & (B != D) & (A != C) &
midpoint(A,E,B) & midpoint(B,F,C) & midpoint(C,G,D) & midpoint(A,H,D)) => rectangle(E
,F,G,H) ))).
```
If Larus is invoked as: ./larus -l100 -m8 -b1, it produces the following human-readable proof
in 14s (the abduct found is highlighted):
Consider arbitrary a, b, c, d, e, f , g, h such that:
- ¬col(b, _d,_ _a),_
- ¬col(b, _d,_ _c),_
- ¬col(a, _c,_ _b),_
- ¬col(a, _c,_ _d),_
- ¬col( _f_ _,_ _g,_ _e),_
- b ̸= d,
- a ̸= c,
- midpoint(a, _e,_ _b),_
- midpoint(b, f _,_ _c),_
- midpoint(c, _g,_ _d),_
- midpoint(a, _h,_ _d)._
It should be proved that rectangle(e, f _,_ _g,_ _h)._
Abducts found:
- cong(e, _g,_ _h, f_ )
1. midpoint(b, _e,_ _a) (by MP, from midpoint(a,_ _e,_ _b), midpoint(a,_ _e,_ _b) using axiom defmidpoint2; instantiation: A_ _b, B_ _e, C_
_7→_ _7→_
_a)_
_7→_
2. pG(e, f _,_ _g,_ _h) (by MP, from_ _col(b,_ _d,_ _a),_ _col(b,_ _d,_ _c),_ _col(a,_ _c,_ _b),_ _col(a,_ _c,_ _d),_ _col( f_ _,_ _g,_ _e), b_ = d, a = c, midpoint(b, _e,_ _a),_
_¬_ _¬_ _¬_ _¬_ _¬_ _̸_ _̸_
_midpoint(b, f_ _,_ _c), midpoint(c,_ _g,_ _d), midpoint(a,_ _h,_ _d) using axiom varignon th; instantiation: A 7→_ _a, B 7→_ _b, C 7→_ _c, D 7→_ _d, I 7→_ _e,_
_J_ _f_ , K _g, L_ _h)_
_7→_ _7→_ _7→_
3. rectangle(e, f _,_ _g,_ _h) (by MP, from pG(e, f_ _,_ _g,_ _h), cong(e,_ _g,_ _h, f_ ) using axiom defrectangle4b; instantiation: A _e, B_ _f_ , C
_7→_ _7→_
_g, D_ _h)_
_7→_ _7→_
4. rectangle(e, f _,_ _g,_ _h) (by MP, from rectangle(e, f_ _,_ _g,_ _h), rectangle(e, f_ _,_ _g,_ _h), rectangle(e, f_ _,_ _g,_ _h), rectangle(e, f_ _,_ _g,_ _h) using_
axiom defrectangle2e; instantiation: A _e, B_ _f_ , C _g, D_ _h)_
_7→_ _7→_ _7→_ _7→_
5. Proved by assumption! (by QEDas)
-----
**7.4** **Problem 4: Partially Specified Goal**
The list of axioms used for Problem 4 is the same as presented in Section 7.1. Only the conjecture is
different: the goal does not have the predicate symbol specified:
```
fof(th_varignon,conjecture,(! [A,B,C,D,E,F,G,H] : (( (~(col(B,D,A))) & (~(col(B,D,C))) &
(~(col(A,C,B))) & (~(col(A,C,D))) & (~ (col(E,F,G))) & (B != D) & (A != C) & midpoint
(A,E,B) & midpoint(B,F,C) & midpoint(C,G,D) & midpoint(A,H,D)) => _(E,F,G,H) ))).
```
If Larus is invoked as ./larus -l100 -m8, it produces the following human-readable proof (for the
goal par(e, f _,_ _g,_ _h), highlighted in the proof) in 2s:_
Consider arbitrary a, b, c, d, e, f, g, h such that:
- ¬col(b, _d,_ _a),_
- ¬col(b, _d,_ _c),_
- ¬col(a, _c,_ _b),_
- ¬col(a, _c,_ _d),_
- ¬col(e, f _,_ _g),_
- b ̸= d,
- a ̸= c,
- midpoint(a, _e,_ _b),_
- midpoint(b, f _,_ _c),_
- midpoint(c, _g,_ _d),_
- midpoint(a, _h,_ _d)._
It should be proved that (e, f _,_ _g,_ _h)._
1. par(a, _c,_ _e, f_ ) (by MP, from _col(a,_ _c,_ _b), midpoint(b, f_ _,_ _c), midpoint(a,_ _e,_ _b) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _a, B_ _c, C_ _b, P_ _f_, Q _e)_
_7→_ _7→_ _7→_ _7→_ _7→_
2. par(a, _c,_ _h,_ _g) (by MP, from_ _col(a,_ _c,_ _d), midpoint(c,_ _g,_ _d), midpoint(a,_ _h,_ _d) using axiom triangle mid par strict; instantiation:_
_¬_
_A_ _a, B_ _c, C_ _d, P_ _g, Q_ _h)_
_7→_ _7→_ _7→_ _7→_ _7→_
3. _par(e, f_ _,_ _g,_ _h)_ (by MP, from par(a, _c,_ _e, f_ ), par(a, _c,_ _h,_ _g),_ _col(e, f_ _,_ _g) using axiom lemma par trans; instantiation: A_
_¬_
_e, B_ _f_, C _a, D_ _c, E_ _g, F_ _h)_
_7→_ _7→_ _7→_ _7→_ _7→_ _7→_
4. Proved by assumption! (by QEDas)
**7.5** **Problem 5: Partially Specified Proof**
The list of axioms used for Problem 5 is as presented in Section 7.3 (with the axiom defrectangle3a
deleted). The conjecture is the same plus the abduct as an assumption, but we add the following hint:
-----
```
fof(hint1,hint,_,_,defrectangle4b(4,5,6,7)).
```
If Larus is invoked as ./larus -l100 -m8, it produces the same proof as in Section 7.3 in 4s, while if
the hint is omitted, it takes 5s.
-----
| [
"Salwa Tabet, Gonzalez",
"Predrag, Janičić",
"Julien, Narboux"
] | 2024-01-22T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2401.11898v1 | https://arxiv.org/abs/2401.11898 | https://www.semanticscholar.org/paper/d089fa745ee06ffaf075692d016c64d772e52b6b |
Automated Theorem Proving by HyperTree Proof Search with Retrieval-Augmented Tactic Generator | We propose an automated theorem prover (ATP) based on the HyperTree Proof Search (HTPS) enhanced by retrieval-augmented tactic generator (ReProver). | null | [
"Sho, Sonoda",
"Naoto, Onda",
"Kei, Tsukamoto",
"Wataru, Kumagai",
"Fumiya, Uchiyama",
"Akiyoshi, Sannnai"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
BBA: Bi-Modal Behavioral Alignment for Reasoning with Large Vision-Language Models | Multimodal reasoning stands as a pivotal capability for large vision-language models (LVLMs). The integration with Domain-Specific Languages (DSL), offering precise visual representations, equips these models with the opportunity to execute more accurate reasoning in complex and professional domains. However, the vanilla Chain-of-Thought (CoT) prompting method faces challenges in effectively leveraging the unique strengths of visual and DSL representations, primarily due to their differing reasoning mechanisms. Additionally, it often falls short in addressing critical steps in multi-step reasoning tasks. To mitigate these challenges, we introduce the Bi-Modal Behavioral Alignment (BBA) prompting method, designed to maximize the potential of DSL in augmenting complex multi-modal reasoning tasks. This method initiates by guiding LVLMs to create separate reasoning chains for visual and DSL representations. Subsequently, it aligns these chains by addressing any inconsistencies, thus achieving a cohesive integration of behaviors from different modalities. Our experiments demonstrate that BBA substantially improves the performance of GPT-4V(ision) on geometry problem solving (28.34% → 34.22%), chess positional advantage prediction (42.08% → 46.99%) and molecular property prediction (77.47% → 83.52%). | The BBA prompting method is introduced, designed to maximize the potential of DSL in augmenting complex multi-modal reasoning tasks, and substantially improves the performance of GPT-4V(ision) on geometry problem solving, chess positional advantage prediction and molecular property prediction. | # BBA: Bi-Modal Behavioral Alignment for Reasoning with Large Vision-Language Models
**Xueliang Zhao[♠][∗]** **Xinting Huang[⋆]** **Tingchen Fu[⋆]** **Qintong Li[♠]**
**Shansan Gong[♠]** **Lemao Liu[⋆]** **Wei Bi[⋆]** **Lingpeng Kong[♠]**
_♠The University of Hong Kong_
⋆Tencent AI Lab
[email protected]
**Abstract**
Multimodal reasoning stands as a pivotal
capability for large vision-language models
(LVLMs). The integration with DomainSpecific Languages (DSL), offering precise visual representations, equips these models with
the opportunity to execute more accurate reasoning in complex and professional domains.
However, the vanilla Chain-of-Thought (CoT)
prompting method faces challenges in effectively leveraging the unique strengths of visual and DSL representations, primarily due
to their differing reasoning mechanisms. Additionally, it often falls short in addressing critical steps in multi-step reasoning tasks. To
mitigate these challenges, we introduce the BiModal Behavioral Alignment (BBA) prompting
method, designed to maximize the potential of
DSL in augmenting complex multi-modal reasoning tasks. This method initiates by guiding
LVLMs to create separate reasoning chains for
visual and DSL representations. Subsequently,
it aligns these chains by addressing any inconsistencies, thus achieving a cohesive integration
of behaviors from different modalities. Our experiments demonstrate that BBA substantially
improves the performance of GPT-4V(ision)
on geometry problem solving (28.34% →
34.22%), chess positional advantage prediction
(42.08% → 46.99%) and molecular property
prediction (77.47% → 83.52%).
**1** **Introduction**
The use of domain-specific language (DSL) (Bowman and Hammerlindl, 2008; Edwards, 1994;
Weininger, 1988) aims to incorporate multimodal
information by providing a precise and unequivocal
alternative form using text.[1] Its application has significantly improved the multimodal reasoning ca
_∗_ This work was done during an internship at Tencent AI
Lab.
1Figure 2 illustrates an example of a DSL tailored for the
geometry domain. Further instances of DSLs can be found in
Appendix D.
pability, yielding notable improvements in intricate
contexts, especially within specialized domains
such as symbolic reinforcement learning (McGrath
et al., 2022; Zahavy et al., 2023; Ruoss et al., 2024)
and diverse scientific fields (Winter et al., 2022).
Multimodal reasoning is a fundamental
capability for large vision-language models
(LVLMs) (OpenAI, 2023; Yang et al., 2023b),
crucial for many of their applications. Despite
the considerable progress made by LVLMs in
multimodal tasks (Lu et al., 2023; Hu et al., 2023),
effectively utilizing them for complex multimodal
reasoning, particularly in conjunction with DSLs,
remains underexplored. The most direct approach
is to feed the LVLMs with both visual data (e.g.,
images) and its corresponding DSL representation
along with the textual queries. They are then
guided through the Chain-of-Thought (CoT) (Wei
et al., 2023) prompting to process step-by-step
reasoning. However, a significant issue with this
approach is that the reasoning processes derived
from different modalities are often inconsistent,
or even conflicting. This inconsistency limits
the ability of LVLMs to effectively integrate the
strengths of visual and DSL representations (§2.1).
Moreover, these models encounter difficulties
in executing multi-step reasoning (Wu et al.,
2023a; Liu and Chen, 2023), which hampers their
effectiveness in addressing critical steps within
complex problems (§2.2).
To address these challenges, we propose a BiModal Behavioral Alignment (BBA) prompting
method that adeptly integrates DSL into complex multimodal reasoning tasks. BBA begins by
prompting LVLMs to generate distinct reasoning
chains from both visual and DSL representations,
and then aligns these chains by resolving inconsistencies, thereby harmonizing the behaviors elicited
from various modalities. BBA offers two primary
advantages. Firstly, it adopts a “late fusion” strategy (Ghanem et al., 2018; Owens and Efros, 2018),
7255
-----
70
60
50
40
30
20
Avg. # of Tokens per Critical Step10
0
CoTd CoTv CoTm BBA
Methods
Spatial
Manipulation
Quantitative
Analysis
40
30
20
10
Propositional
Reasoning
Algebraic
Manipulation
CoTd
CoTv Logical
CoTm Deduction
BBA
Figure 1: Comparative analyses of different methods in problem-solving and critical step detailing. Left: Problemsolving rates across diverse problem types, where CoTd and CoTv refer to Chain-of-Thought prompting with DSL
and image inputs, respectively, and CoTm represents the approach combining both inputs. Right: Average number
of tokens per critical step across different methods.
effectively maintaining the inherent strengths of
both the direct vision input and the DSL representation. Secondly, BBA turns the inconsistency across
modalities into a beneficial signal that aids in identifying critical steps within reasoning processes.
By revealing where the reasoning chains differ, it
efficiently allocates more intermediate tokens to
these critical steps by resolving the inconsistencies
found.
We evaluate BBA on three multimodal reasoning tasks: geometry problem-solving, chess positional advantage prediction, and molecular property prediction. In these diverse applications, BBA
demonstrated notable relative improvements, with
respective performance improvements of 14.26%,
10.25%, and 6.30%.
**2** **Pilot Study**
In this study, we compare three variants of CoT
prompting within domains where DSL is available.
These variations include: (1) CoTv, which utilizes
only images for grounding responses to queries;
(2) CoTd, which relies exclusively on DSL representations for grounding; and (3) CoTm, which
integrates both images and DSL representations.
We focus on a selection of mathematical geometry
problems from the MATH benchmark (Hendrycks
et al., 2021), comprising a total of 187 problems
that incorporate image inputs. We then explore the
difficulties associated with performing multi-modal
reasoning using both images and DSL representa
tions, through an empirical examination of distinct
success rates across various problem types and the
allocation of tokens for critical reasoning steps.
**2.1** **Performance on Fine-grained Types**
Our analysis begins with an assessment of the performance of different models on fine-grained problem types. To this end, we categorically divide the
geometry problems based on the primary skills required for resolution, resulting in five categories:
(1) Spatial Manipulation, (2) Propositional Reasoning, (3) Logical Deduction, (4) Algebraic Manipulation, and (5) Quantitative Analysis. Additional details on the categorization annotation can
be found in Appendix A.1. We proceed to calculate
and compare the problem-solving rates for each
category.
Figure 1 offers a visual comparison of the models’ performances across these categories. It is
evident that CoTv and CoTd exhibit significantly
different levels of effectiveness across these problem types. Specifically, CoTv shows superior performance in tasks involving spatial manipulation
and propositional reasoning, while CoTd excels
in logical deduction, algebraic manipulation, and
quantitative analysis. This variation in performance
can be attributed to the different reasoning mechanisms enabled by each modality. DSL representations provide detailed information (e.g., precise
coordinates) that support logic-oriented operations.
On the other hand, images provide intuitive visual
7256
-----
**3** **Preliminaries**
**3.1** **Problem Formulation**
This study focuses on multi-modal reasoning tasks,
specifically where the visual modality is represented as an image, coupled with a DSL that accurately depicts the image. Our objective is to predict
an answer to a given question q, associated with an
image v and a DSL representation d, adhering to
specific task requirements (e.g., solving mathematical problems).
The emergence of LVLMs has streamlined this
process. Owing to extensive pre-training on trillions of tokens, these models can accurately interpret various instructions and execute the corresponding tasks. In this paradigm, the model parameters are denoted by θ, and the answer ˆa is generated as ˆa = arg maxa p(a | q, v, d; θ), where the
inputs are reformulated into well-crafted prompts
using specific templates, designed to elicit the desired response from the LVLMs.
**3.2** **Chain-of-Thought Prompting**
Recently, chain-of-thought prompting has gained
recognition as an effective technique for enhancing the reasoning capabilities of language models (Wei et al., 2023). This method decomposes
the original task into two distinct phases: rationale generation and answer prediction. In the rationale generation phase, a rationale ˆr is derived
as ˆr = arg maxr p(r | q, v, d; θ), leveraging a
query augmented with an instruction designed to
initiate stepwise analytical thinking (Kojima et al.,
2022)). Subsequently, the answer is often deduced
directly from the rationale, utilizing heuristic stringmatching methods for precise identification.
**4** **Method**
This work aims to tackle two primary challenges
in multi-modal reasoning: (1) the integration of
the inherent strengths of both visual and DSL representations, and (2) the identification and resolution of critical steps within these tasks. To address
these challenges, we introduce the BBA prompting method, an innovative approach that seeks to
unleash the power of DSL in enhancing complex
multi-modal reasoning tasks. Figure 2 offers an
overview of our proposed methodology. BBA initiates by employing LVLMs to generate reasoning
chains separately from visual and DSL inputs. Subsequently, these chains proceed through an alignment phase, wherein inconsistencies are identified
cues that are more conducive to spatial reasoning
tasks. Despite the concurrent use of images and
DSL representations, CoTm does not demonstrate
uniform improvements across all problem types, indicating the challenge of aligning reasoning mechanisms across modalities. In §4, we elaborate on
BBA, which initiates by independently deriving
reasoning chains from images and DSL representations, and then aligning these chains by resolving
any inconsistencies between them. Unlike CoTm,
BBA effectively capitalizes on the strengths of
both modalities, achieving comprehensive improvements across all identified problem categories.
**2.2** **Token Allocation for Critical Steps**
In light of recent theoretical advances (Feng et al.,
2023; Merrill and Sabharwal, 2023) indicating the
effective allocation of intermediate tokens as pivotal for unlocking the expressive power of models
in sequential reasoning tasks, we delve into the
allocation of intermediate tokens for addressing
critical steps in problem-solving. A critical step in
solving mathematical problems is defined as the
point at which an essential insight, decision, or
application of a method is crucial for obtaining
the correct solution, typically involving a significant conceptual leap, strategic theorem application,
or key calculation that influences the subsequent
problem-solving process. For each problem, we
identify all critical steps, categorizing each step
in the generated solution as either corresponding
to one of the identified critical steps or not, and
then sum the tokens for steps within a generated
solution that are associated with the same critical
step. Details on the annotation of critical steps are
provided in Appendix A.2.
Figure 1 demonstrates that merely combining
images and DSL representations in inputs is insufficient for effectively allocating more tokens to
critical steps, thus reducing the expressive power of
LLMs and leading to inferior overall performance
(as discussed in §5.4). We hypothesize that this
limitation arises from the current inefficiencies of
LLMs in exploring the solution space for complex
problems (Yang et al., 2023b), resulting in their
struggle to accurately identify critical steps. As
will be discussed in §4.2, BBA is more effective in
discerning and addressing critical steps by uncovering and reconciling discrepancies among reasoning
chains derived from different modalities.
7257
-----
|Col1|Rationale (𝑟 ̂): … the area of the pentagon is \boxed{20}.|
|---|---|
Image:
Rationale (𝑟![):]
… by drawing diagonals between
BE and CE, the polygon is divided 𝑆= 𝑆!"#$ + 𝑆!%"$ + 𝑆!&%$
into ...,
… with triangle CDE having an area of 8 …… 𝑆= 𝑆!"#$ + 𝑆&%"$
Question: Rationale (𝑟 ̂):
Find the area of the irregular pentagon. Diagnostic … the area of the pentagon is \boxed{20}.
Domain-specific Language: Rationale (𝑟"): 𝑆= $1 2 ' CD ' DE →𝑆= $1 2 ×4×4 Alignment
… the polygon consists of the
import graph; pair A = (0, 0); pair B = (1, 2); quadrangle ABCE and the triangle CDE …, 𝑆= $1 2 ' CE ' h →𝑆= $1 2 ×6×2
pair C = (4, 4); pair D = (6, 1); … the area of CDE is 6 …
pair E = (4, -2);
draw(A--B--C--D--E--cycle, red + 1.5bp);
…
Figure 2: An instantiation of the proposed BBA method.
and reconciled, ensuring the harmonization of behaviors derived from each modality.
**Road Map.** The rest of this section is structured
as follows: We begin by detailing the process of
eliciting reasoning chains from both vision and
DSL representations in §4.1. This is followed
by an elaboration on diagnosing and rectifying inconsistencies between these reasoning chains and
the methods of aligning behaviors from different
modalities in §4.2. Lastly, in §4.3, we detail how
BBA effectively identifies and addresses critical
steps in the reasoning process.
**4.1** **Bi-Modal Behavior Eliciting**
The objective of this phase is to effectively harness
the unique strengths of vision and DSL representations in answering a given question. Unlike vanilla
CoT prompting, which intermingles the reasoning
processes of these two modalities, BBA seeks to
elicit reasoning chains from each modality independently. This approach allows the vision-based
reasoning chain to deliver more credible steps in intuitive and spatial reasoning, while the DSL-based
reasoning chain provides steps with greater reliability in precise computation. The formal definition
of this process is as follows:
_rv = arg maxr_ _p(r | q, v; θ)_ (1)
_rd = arg maxr_ _p(r | q, d; θ)._
where rv and rd represent the reasoning chains
derived from the vision and DSL representations,
respectively.
**4.2** **Behavior Alignment**
This phase is centered on aligning the reasoning
chains from different modalities to capitalize on
the best of both worlds in multi-modal reasoning.
We initiate this process with diagnostic checks to
uncover inconsistencies between the chains, including variances in intermediate steps and the final answers. Following this, an aligned reasoning chain
is created by addressing the discrepancies identified in the diagnostics. When different methods
produce conflicting results, it often indicates an error in at least one approach. The divergence point
then becomes a crucial indicator of where deeper
understanding or more meticulous application of
principles is necessary. The model is subsequently
instructed to thoroughly examine the derivations
from both modalities and ascertain accurate conclusions. The diagnostic results are formally obtained
as follows:
_rinc = arg maxr_ _p(r | rv, rd; θ),_ (2)
where rinc denotes the rationale for inconsistencies
identified during the diagnostic process. Next, the
formation of the aligned reasoning chain is defined
as:
_rˆ = arg maxr_ _p(r | rv, rd, rinc; θ)_ (3)
where the final rationale ˆr includes the definitive
answer a within special tokens.
**4.3** **Discussion**
The strengths of BBA can be mainly attributed to
its capability to address critical steps in multi-step
reasoning problems. BBA excels in addressing critical steps primarily due to two reasons: (1) the
critical step is more easily identified by contrasting different solutions, revealing their divergences;
and (2) learning from these differences allows for a
more efficient allocation of intermediate tokens to
these critical steps. Drawing from cognitive learning principles observed in humans, it is a plausible
7258
-----
extrapolation that identifying and rectifying disparities between various methods fosters a deeper comprehension of essential aspects of a problem (Munzar et al., 2021). Furthermore, encountering and
acknowledging mistakes enhances the reasoning
process, paralleling human problem-solving strategies. This not only deepens the understanding but
also facilitates the allocation of additional reasoning tokens, thereby amplifying the model’s capacity
to resolve critical steps (Feng et al., 2023; Merrill
and Sabharwal, 2023).
**5** **Experiments**
**5.1** **Datasets and Evaluation**
We assess the efficacy of BBA across three multimodal reasoning tasks spanning distinct domains:
geometry problem-solving, chess positional advantage prediction, and molecular property prediction.
**Geometry Problem-Solving.** This task involves
predicting a free-form solution to a given geometry problem. We utilize the geometry subset of
the MATH benchmark (Hendrycks et al., 2021)
for this task, selecting only those problems that include Asymptote code (Bowman and Hammerlindl,
2008), a domain-specific language (DSL) used for
depicting geometric figures. This process resulted
in a dataset of 187 problems, which we refer to as
**G-MATH. The official evaluation script from the**
MATH benchmark is employed to compute accuracy by comparing the predicted answers with the
correct answers.
**Chess Positional Advantage Prediction.** The
objective in chess positional advantage prediction
is to classify a given chessboard state as being advantageous for White, advantageous for Black, or
balanced. This task evaluates the model’s capacity
to correlate with the actual value of a chessboard
state, determined by chess engines after extensive
analysis. For evaluation, we compiled a dataset
of 183 game snippets, applying Stockfish 15 at a
search depth of 18 to assess the winning probability for the white pieces. We classified the winning
probabilities into three intervals: 0–33% indicating
an advantage for Black, 34–66% denoting a balanced state, and 67–100% suggesting an advantage
for White. We refer to this dataset as ChessAdv,
employing Forsyth-Edwards Notation (FEN) (Edwards, 1994) as the DSL for this domain. Classification accuracy serves as the evaluation metric.
**Molecular** **Property** **Prediction.** Molecular
property prediction focuses on determining
whether a molecule exhibits a certain property
based on its molecular graph. The MUTAG benchmark dataset (Debnath et al., 1991) is used for this
purpose, comprising 188 chemical compounds categorized into two classes based on their mutagenic
effects on a bacterium. The Simplified MolecularInput Line-Entry System (SMILES) (Weininger,
1988) is utilized as the DSL in this domain, with
classification accuracy as the metric for evaluation.
**5.2** **Baselines**
For comparative evaluation, we adopt the following
baselines:
**DSL or Visual-Only Methods.** (1) CoTv: Implements chain-of-thought prompting (Wei et al.,
2023), omitting DSL representations and relying
solely on images; (2) CoTd: Utilizes chain-ofthought prompting, excluding images to focus exclusively on DSL representations; (3) Plan-and**Solve: Formulates a plan to segment the overall**
task into manageable subtasks for sequential execution (Wang et al., 2023a); and (4) Least-to-Most:
Breaks complex problems into simpler, sequential subproblems, leveraging solutions of preceding subproblems to facilitate solving subsequent
ones (Zhou et al., 2022).
**Integrated DSL and Visual Methods.** (1)
**CoTm: Employs chain-of-thought prompting us-**
ing a combination of both DSL representations and
images; (2) CCoT: enhances compositional reasoning by integrating visual and DSL inputs, substituting the scene graph with DSL for fair comparison (Mitra et al., 2023); (3) DDCoT: Introduces
negative-space prompting and multimodal reasoning by dividing cognitive tasks between reasoning
and recognition, enhancing reasoning with visual
recognition capabilities (Zheng et al., 2023).
All baseline methods, alongside BBA, are implemented on GPT-4V(ision) (OpenAI, 2023), utilizing the gpt-4-vision-preview version to
ensure a fair and consistent comparison.
**5.3** **Implementation Details**
For geometry problem-solving and chess positional
advantage prediction, we employ zero-shot prompting. In the case of molecular property prediction,
we augment the instruction with four <SMILES,
category> pairs, given the challenge this specialized task presents to the GPT-4V(ision). It is cru
7259
-----
Methods With DSL With Figure **G-MATH** **ChessAdv** **MUTAG** Avg.
CoTv (Wei et al., 2023) ✗ ✓ 23.53 40.98 75.82 46.56
CoTd (Wei et al., 2023) ✓ ✗ 23.12 38.80 76.92 46.01
Plan-and-Solve (Wang et al., 2023a) ✓ ✗ 25.67 42.62 78.57 48.73
Least-to-Most (Zhou et al., 2022) ✓ ✗ 25.13 38.25 73.63 45.47
CoTm (Wei et al., 2023) ✓ ✓ 28.34 42.08 77.47 49.09
CCoT (Mitra et al., 2023) ✓ ✓ 26.74 39.34 68.68 44.75
DDCoT (Zheng et al., 2023) ✓ ✓ 29.95 37.70 73.08 46.74
BBA (Ours) ✓ ✓ **34.22** **46.99** **83.52** **54.71**
Table 1: Evaluation results for geometry problem-solving (G-MATH), chess positional advantage prediction
(ChessAdv), and molecular property prediction (MUTAG), including average performance. Numbers in bold
denote the best performance.
Methods With DSL With Figure **G-MATH** **ChessAdv** **MUTAG** Avg.
BBA (Ours) ✓ ✓ **34.22** **46.99** **83.52** **54.71**
-diagnostic ✓ ✓ 32.09 41.53 78.57 50.54
-visual ✓ ✗ 28.34 37.70 61.54 42.39
-dsl ✗ ✓ 27.27 36.07 75.82 46.20
Table 2: Ablation study results with best performances highlighted in bold.
cial to note that these SMILES representations
are excluded from the test cases to prevent data
leakage. Detailed instructions for these tasks
can be found in Appendix B. To interact with
the gpt-4-vision-preview, the temperature
and top_p are set to 0 and 1, respectively, to ensure
deterministic outputs, while the max_tokens parameter is capped at 2048.
**5.4** **Main Results**
The results of our experiments, presented in Table 1, reveal several key observations: (1) BBA
surpasses all compared baseline methods, achieving relative improvements of 14.26%, 10.25%, and
6.30% in geometry problem-solving, chess positional advantage prediction, and molecular property prediction, respectively. This superior performance can be attributed to BBA’s adeptness at
leveraging the combined strengths of both visual
and DSL representations, along with its capacity to
pinpoint and address critical steps; (2) The integration of DSL and visual information proves advantageous for multi-modal reasoning tasks. Our results
demonstrate that CoTm achieves the second-best
average performance, notably excelling in geometry problem-solving. This task benefits markedly
from the complementary insights provided by DSL
and visual inputs, indicating the value of integrating these modalities; and (3) The process of effectively merging DSL representations with visual
data poses a significant challenge, as evidenced by
the subpar performance of CCoT.
**6** **Analysis**
**6.1** **Ablation Study**
This ablation study evaluates four variants of our
model across three datasets, as shown in Table 2.
These variants comprise the full method and three
variants: one without the diagnostic check (“diagnostic”), where the reasoning process is solely
based on divergent reasoning chains from different
modalities without any verification; one lacking
image inputs (“-visual”), where the model’s assessment of reasoning chains relies exclusively on the
DSL representation and its intrinsic knowledge;
and one excluding DSL inputs (“-dsl”), where the
evaluation of reasoning chains depends solely on
visual information and the model’s inherent understanding.
The results demonstrate that our full method outperforms all variants on the datasets, indicating
the crucial role of combining DSL and visual inputs alongside diagnostic checks for identifying
discrepancies and enhancing problem-solving in
critical steps. Notably, the exclusion of visual inputs results in the most significant performance
drop, highlighting the vital contribution of images
to the efficacy of multi-modal reasoning tasks.
7260
-----
Level 1 Level 2 Level 3 Level 4 Level 5 Avg.
BBA (Ours) **71.43** **53.13** **44.12** 16.98 **17.02** **34.22**
CoTm 61.90 37.50 29.41 **24.53** 10.64 28.34
CoTv 52.38 37.50 26.47 13.21 10.64 23.53
CoTd 47.62 50.00 29.41 7.69 6.38 23.12
Table 3: Evaluation results on the geometry problem-solving task. Numbers in bold indicate the best performance.
Level 1 Level 2 Level 3 Avg.
BBA (Ours) **57.41** **43.21** **41.67** **46.99**
CoTm 51.85 37.04 39.58 42.08
CoTv 48.15 38.27 37.50 40.98
CoTd 46.30 33.33 39.58 38.80
Table 4: Evaluation results on the chess positional advantage prediction task. Numbers in bold indicate the best
performance.
**6.2** **Analysis on Different Complexities**
This experiment delves into how BBA performs
under varying problem complexities, comparing
it with three variants of chain-of-thought prompting. Our focus is on geometry problem-solving and
chess positional advantage prediction due to the
labor-intensive nature of assessing the difficulty
of molecular graphs. For geometry, we utilize
the difficulty levels outlined by the MATH benchmark (Hendrycks et al., 2021), and for chess, we
classify problems into three difficulty levels based
on the centipawns returned by Stockfish 15.
Table 3 and Table 4 present the results. BBA
consistently outperforms competitors across nearly
all difficulty levels, except level 4 in geometry problem-solving. Integrating DSL and image inputs proves advantageous, as CoTm typically surpasses the performance of both CoTv
and CoTd. However, achieving universal improvements through direct integration presents a significant challenge (as discussed in §2.1). In geometry
problem-solving, DSL representations are particularly effective in simpler problems, but this advantage diminishes with increased complexity. We hypothesize this is due to the lengthening of Asymptote code in more complex problems. For instance,
the average Asymptote code length is 186.89 for
levels 1 to 3, but increases to 217.80 for levels 4
to 5, whereas the length of FEN notation remains
relatively stable across different levels of difficulty.
**6.3** **Comparison with Self-Refine Prompting**
This experiment explores the efficacy of self-refine
prompting (Madaan et al., 2023), a technique
that improves previous outputs through iterative
feedback and refinement, as a potential substitute
for the diagnostic check and alignment phases in
BBA. We have adapted the conventional self-refine
prompting approach to accommodate both DSL
and image inputs, while preserving the original implementation details to the greatest extent. This
experiment evaluates three versions of self-refine
prompting, denoted as Self-Refine (x turns), with
_x −_ 1 indicating the count of refinement cycles and
_x varying from 2 to 4._
Table 5 presents the results. The findings reveal
that BBA consistently surpasses the various versions of self-refine prompting. This indicates the
superiority of directing LVLMs to pinpoint inconsistencies between divergent solutions over merely
generating feedback based on the knowledge embedded within their parameters. Moreover, recent
work (Huang et al., 2023) corroborates our findings,
demonstrating that LLMs frequently encounter difficulties in adjusting their responses based solely
on their inherent capabilities. This is further validated by our results, which indicate a decline in
the performance of the self-refine prompting as the
number of refinement iterations increases.
**6.4** **Case Study**
Due to space constraints, the case study is included
in Appendix D.
7261
-----
Methods **G-MATH** **ChessAdv** **MUTAG** Avg.
BBA (Ours) **34.22** **46.99** **83.52** **54.71**
Self-Refine (2 turns) 30.48 43.17 73.63 48.91
Self-Refine (3 turns) 28.34 42.08 71.98 47.28
Self-Refine (4 turns) 28.88 38.80 68.68 45.29
Table 5: Comparative analysis of BBA versus Self-Refine prompting. Numbers in bold denote the best performance.
**7** **Related Work**
**7.1** **Multi-Modal CoT Prompting**
An advanced methodology for zero-shot image reasoning leverages CoT prompting, a technique that
breaks down complex tasks into simpler, sequential thought processes to simulate human reasoning (Lu et al., 2022; Zhang et al., 2023b; Wang
et al., 2023c). Due to the structural differences between LVLMs and LLMs, additional improvements
have been made to adapt CoT for wider applications. To illustrate, QVix (Yang et al., 2023a) leverages LLMs’ linguistic skills to enhance LVLMs’
visual content analysis; V[∗] (Wu and Xie, 2023)
enhances the precise targeting of specific visual
elements; Wu et al. (2023b) address CoT prompting’s limitations by adopting a “Description then
Decision” strategy for complex vision-linguistic
tasks; CoCoT (Zhang et al., 2024) uses a contrastive CoT approach for multiple image inputs;
ViLa (Hu et al., 2023) merges perceptual data with
CoT for physically-grounded task planning; and
DDCoT (Zheng et al., 2023) assigns tasks to relevant components, differentiating reasoning and
recognition roles and integrating visual recognition
into the reasoning process. Despite these advancements, the strategic use of prompting mechanisms
to seamlessly integrate DSLs into LVLMs presents
an untapped potential, a gap our research aims to
bridge by pioneering in this specific area.
**7.2** **Multiple Chains Prompting**
Following the progress of the chain-of-thought
prompting, a series of efforts have been made
to enhance factuality by generating multiple
reasoning chains. Building on this progress,
the research focuses on three main approaches:
self-consistency (Wang et al., 2022a), selfrefinement (Madaan et al., 2023; Shinn et al., 2023;
Chen et al., 2023b), and multi-agent debate (Du
et al., 2023; Liang et al., 2023; Xiong et al., 2023).
Self-consistency (Wang et al., 2022b) involves a
method where various reasoning paths are first generated, and then the most consistent answer is selected through a process akin to majority voting.
Self-refinement (Madaan et al., 2023) leverages the
inherent capabilities of LLMs to generate feedback
for previous outputs, refining them based on this
feedback. However, recent research (Huang et al.,
2023) indicates that LLMs face challenges in providing accurate feedback independently, suggesting that feedback from external environments (First
et al., 2023) is a more effective alternative. Multiagent debate (Du et al., 2023) aims to replicate
real-world debate scenarios, fostering a consensus
by incorporating outputs from previous iterations
in each debate cycle. These methods, while innovative, have yet to fully address the need for identifying intermediate inconsistencies between multiple chains which play a crucial role in pinpointing
the critical steps necessary for solving complex
tasks. Moreover, the requirement for multiple invocations of LLMs, particularly with proprietary
LVLMs (OpenAI, 2023), significantly increases the
associated costs.
We provide a detailed review of the literature on
large vision-language models in Appendix C.
**8** **Conclusion**
In conclusion, our work introduces the Bi-Modal
Behavioral Alignment (BBA) prompting method, a
novel approach that significantly enhances the multimodal reasoning capabilities of GPT-4V(ision)
by integrating DSL. By generating and aligning
separate reasoning chains for visual and DSL representations, BBA addresses the challenges of inconsistent reasoning mechanisms and the execution of multi-step reasoning tasks. Our experiments across diverse domains, including geometry
problem-solving, chess positional advantage prediction, and molecular property prediction, demonstrate the effectiveness of BBA, showcasing notable
improvements in performance.
7262
-----
**Ethical Considerations**
In adherence to the established Code of Ethics, this
work exclusively employs publicly accessible data
and information, ensuring no private or confidential
resources are utilized.
**Limitations**
BBA marks a significant advancement in the field
of multi-modal reasoning, incorporating DSLs. Despite this, it is beneficial to address several limitations to fully exploit its capabilities:
(1) BBA demonstrates significant improvements
in three distinct domains: geometry, chess, and
molecular biology. Yet, its application in other areas, especially those without custom DSLs, has not
been extensively explored. Adapting BBA by substituting DSL representations with alternative, advanced representations, such as scene graphs (Yang
et al., 2018), could be advantageous. These alternatives, although less precise and informative in
capturing image nuances, offer a valuable research
direction.
(2) The primary aim of this work is to develop a
prompting method that complements, but is distinct
from, other advanced technologies which respond
to environmental feedback (Yao et al., 2022; Xie
et al., 2023). Integrating and responding to environmental feedback to develop a more adaptive
and intelligent agent presents an intriguing future
research direction.
**Acknowledgements**
We extend our gratitude to the HKU NLP group
and the anonymous reviewers for their invaluable suggestions, which significantly enhanced
this work. This work is partially supported by
the joint research scheme of the National Natural
Science Foundation of China (NSFC) and the Research Grants Council (RGC) under grant number
N_HKU714/21.
**References**
Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc,
Antoine Miech, Iain Barr, Yana Hasson, Karel
Lenc, Arthur Mensch, Katherine Millican, Malcolm
Reynolds, et al. 2022. Flamingo: a visual language
model for few-shot learning. Advances in Neural
_Information Processing Systems, 35:23716–23736._
Anas Awadalla, Irena Gao, Josh Gardner, Jack Hessel, Yusuf Hanafy, Wanrong Zhu, Kalyani Marathe,
Yonatan Bitton, Samir Gadre, Shiori Sagawa, et al.
2023. Openflamingo: An open-source framework for
training large autoregressive vision-language models.
_arXiv preprint arXiv:2308.01390._
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang,
Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou,
and Jingren Zhou. 2023. Qwen-vl: A frontier large
vision-language model with versatile abilities. arXiv
_preprint arXiv:2308.12966._
John C Bowman and Andy Hammerlindl. 2008. Asymptote: A vector graphics language. TUGboat: The
_Communications of the TEX Users Group, 29(2):288–_
294.
Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang,
Feng Zhu, and Rui Zhao. 2023a. Shikra: Unleashing
multimodal llm’s referential dialogue magic. arXiv
_preprint arXiv:2306.15195._
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and
Denny Zhou. 2023b. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128.
W Dai, J Li, D Li, AMH Tiong, J Zhao, W Wang,
B Li, P Fung, and S Hoi. 2023. Instructblip:
Towards general-purpose vision-language models
with instruction tuning. arxiv 2023. arXiv preprint
_arXiv:2305.06500, 2._
Asim Kumar Debnath, Rosa L Lopez de Compadre,
Gargi Debnath, Alan J Shusterman, and Corwin
Hansch. 1991. Structure-activity relationship of
mutagenic aromatic and heteroaromatic nitro compounds. correlation with molecular orbital energies
and hydrophobicity. Journal of medicinal chemistry,
34(2):786–797.
Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325.
Steven J Edwards. 1994. Portable game notation specification and implementation guide. Retrieved April,
4:2011.
Guhao Feng, Yuntian Gu, Bohang Zhang, Haotian Ye,
Di He, and Liwei Wang. 2023. Towards revealing
the mystery behind chain of thought: a theoretical
perspective. arXiv preprint arXiv:2305.15408.
Emily First, Markus N Rabe, Talia Ringer, and Yuriy
Brun. 2023. Baldur: Whole-proof generation and
repair with large language models. arXiv preprint
_arXiv:2303.04910._
Peng Gao, Jiaming Han, Renrui Zhang, Ziyi Lin, Shijie
Geng, Aojun Zhou, Wei Zhang, Pan Lu, Conghui
He, Xiangyu Yue, et al. 2023. Llama-adapter v2:
Parameter-efficient visual instruction model. arXiv
_preprint arXiv:2304.15010._
7263
-----
Bernard Ghanem, Juan Carlos Niebles, Cees Snoek,
Fabian Caba Heilbron, Humam Alwassel, Victor
Escorcia, Ranjay Krishna, Shyamal Buch, and
Cuong Duc Dao. 2018. The activitynet large-scale
activity recognition challenge 2018 summary. arXiv
_preprint arXiv:1808.03766._
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. 2021. Measuring mathematical problem solving with the math dataset. arXiv preprint
_arXiv:2103.03874._
Yingdong Hu, Fanqi Lin, Tong Zhang, Li Yi, and Yang
Gao. 2023. Look before you leap: Unveiling the
power of gpt-4v in robotic vision-language planning.
_arXiv preprint arXiv:2311.17842._
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, and Denny Zhou. 2023. Large language
models cannot self-correct reasoning yet. _arXiv_
_preprint arXiv:2310.01798._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in
_neural information processing systems, 35:22199–_
22213.
Bo Li, Yuanhan Zhang, Liangyu Chen, Jinghao Wang,
Jingkang Yang, and Ziwei Liu. 2023a. Otter: A
multi-modal model with in-context instruction tuning.
_arXiv preprint arXiv:2305.03726._
Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Hanwang Zhang, Wei Ji, Wenqiao Zhang, Tat-Seng Chua,
Siliang Tang, and Yueting Zhuang. 2023b. Finetuning multimodal llms to follow zeroshot demonstrative instructions. arXiv preprint arXiv:2308.04152,
3.
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.
2023c. Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models. arXiv preprint arXiv:2301.12597.
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang,
Yan Wang, Rui Wang, Yujiu Yang, Zhaopeng Tu, and
Shuming Shi. 2023. Encouraging divergent thinking
in large language models through multi-agent debate.
_arXiv preprint arXiv:2305.19118._
Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae
Lee. 2023a. Improved baselines with visual instruction tuning. arXiv preprint arXiv:2310.03744.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae
Lee. 2023b. Visual instruction tuning. arXiv preprint
_arXiv:2304.08485._
Mengchen Liu and Chongyan Chen. 2023. An evaluation of gpt-4v and gemini in online vqa. arXiv
_preprint arXiv:2312.10637._
Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, KaiWei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. 2022. Learn to explain:
Multimodal reasoning via thought chains for science
question answering. Advances in Neural Information
_Processing Systems, 35:2507–2521._
Yujie Lu, Xiujun Li, William Yang Wang, and Yejin
Choi. 2023. Vim: Probing multimodal large language models for visual embedded instruction following. arXiv preprint arXiv:2311.17647.
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
et al. 2023. Self-refine: Iterative refinement with
self-feedback. arXiv preprint arXiv:2303.17651.
Thomas McGrath, Andrei Kapishnikov, Nenad Tomašev,
Adam Pearce, Martin Wattenberg, Demis Hassabis,
Been Kim, Ulrich Paquet, and Vladimir Kramnik.
2022. Acquisition of chess knowledge in alphazero.
_Proceedings of the National Academy of Sciences,_
119(47):e2206625119.
William Merrill and Ashish Sabharwal. 2023. The
expresssive power of transformers with chain of
thought. arXiv preprint arXiv:2310.07923.
Chancharik Mitra, Brandon Huang, Trevor Darrell, and
Roei Herzig. 2023. Compositional chain-of-thought
prompting for large multimodal models. _arXiv_
_preprint arXiv:2311.17076._
Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed
Awadallah. 2023. Orca: Progressive learning from
complex explanation traces of gpt-4. arXiv preprint
_arXiv:2306.02707._
Brendan Munzar, Krista R Muis, Courtney A Denton,
and Kelsey Losenno. 2021. Elementary students’
cognitive and affective responses to impasses during
mathematics problem solving. Journal of Educa_tional Psychology, 113(1):104._
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Andrew Owens and Alexei A Efros. 2018. Audio-visual
scene analysis with self-supervised multisensory features. In Proceedings of the European conference on
_computer vision (ECCV), pages 631–648._
Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao,
Shaohan Huang, Shuming Ma, and Furu Wei.
2023. Kosmos-2: Grounding multimodal large
language models to the world. _arXiv preprint_
_arXiv:2306.14824._
Anian Ruoss, Grégoire Delétang, Sourabh Medapati, Jordi Grau-Moya, Li Kevin Wenliang, Elliot Catt, John Reid, and Tim Genewein. 2024.
Grandmaster-level chess without search. _arXiv_
_preprint arXiv:2402.04494._
7264
-----
Noah Shinn, Federico Cassano, Ashwin Gopinath,
Karthik R Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with verbal reinforcement
learning. In Thirty-seventh Conference on Neural
_Information Processing Systems._
Gemini Team, Rohan Anil, Sebastian Borgeaud,
Yonghui Wu, Jean-Baptiste Alayrac, Jiahui Yu,
Radu Soricut, Johan Schalkwyk, Andrew M Dai,
Anja Hauth, et al. 2023. Gemini: a family of
highly capable multimodal models. arXiv preprint
_arXiv:2312.11805._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu,
Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
2023a. Plan-and-solve prompting: Improving zeroshot chain-of-thought reasoning by large language
models. arXiv preprint arXiv:2305.04091.
Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi
Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang,
Lei Zhao, Xixuan Song, et al. 2023b. Cogvlm: Visual expert for pretrained language models. arXiv
_preprint arXiv:2311.03079._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc
Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery,
and Denny Zhou. 2022a. Self-consistency improves
chain of thought reasoning in language models. arXiv
_preprint arXiv:2203.11171._
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022b. Self-instruct: Aligning language model with self generated instructions. arXiv
_preprint arXiv:2212.10560._
Ziyue Wang, Chi Chen, Peng Li, and Yang Liu. 2023c.
Filling the image information gap for vqa: Prompting
large language models to proactively ask questions.
_arXiv preprint arXiv:2311.11598._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2023. Chain-of-thought prompting elic-](http://arxiv.org/abs/2201.11903)
[its reasoning in large language models.](http://arxiv.org/abs/2201.11903)
David Weininger. 1988. Smiles, a chemical language
and information system. 1. introduction to methodology and encoding rules. Journal of chemical infor_mation and computer sciences, 28(1):31–36._
Benedikt Winter, Clemens Winter, Johannes Schilling,
and André Bardow. 2022. A smile is all you need:
predicting limiting activity coefficients from smiles
with natural language processing. Digital Discovery,
1(6):859–869.
Penghao Wu and Saining Xie. 2023. V[∗]: Guided visual search as a core mechanism in multimodal llms.
_arXiv preprint arXiv:2312.14135._
Yang Wu, Shilong Wang, Hao Yang, Tian Zheng,
Hongbo Zhang, Yanyan Zhao, and Bing Qin. 2023a.
An early evaluation of gpt-4v (ision). arXiv preprint
_arXiv:2310.16534._
Yifan Wu, Pengchuan Zhang, Wenhan Xiong, Barlas
Oguz, James C Gee, and Yixin Nie. 2023b. The
role of chain-of-thought in complex vision-language
reasoning task. arXiv preprint arXiv:2311.09193.
Tianbao Xie, Fan Zhou, Zhoujun Cheng, Peng Shi, Luoxuan Weng, Yitao Liu, Toh Jing Hua, Junning Zhao,
Qian Liu, Che Liu, et al. 2023. Openagents: An
open platform for language agents in the wild. arXiv
_preprint arXiv:2310.10634._
Kai Xiong, Xiao Ding, Yixin Cao, Ting Liu, and Bing
Qin. 2023. Examining inter-consistency of large language models collaboration: An in-depth analysis via
debate. In Findings of the Association for Computa_tional Linguistics: EMNLP 2023, pages 7572–7590._
Jianwei Yang, Jiasen Lu, Stefan Lee, Dhruv Batra, and
Devi Parikh. 2018. Graph r-cnn for scene graph generation. In Proceedings of the European conference
_on computer vision (ECCV), pages 670–685._
Kaiwen Yang, Tao Shen, Xinmei Tian, Xiubo Geng,
Chongyang Tao, Dacheng Tao, and Tianyi Zhou.
2023a. Good questions help zero-shot image reasoning. arXiv preprint arXiv:2312.01598.
Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng
Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan
Wang. 2023b. The dawn of lmms: Preliminary
explorations with gpt-4v (ision). _arXiv preprint_
_arXiv:2309.17421, 9(1)._
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak
Shafran, Karthik Narasimhan, and Yuan Cao. 2022.
React: Synergizing reasoning and acting in language
models. arXiv preprint arXiv:2210.03629.
Qinghao Ye, Haiyang Xu, Guohai Xu, Jiabo Ye,
Ming Yan, Yiyang Zhou, Junyang Wang, Anwen Hu, Pengcheng Shi, Yaya Shi, et al. 2023.
mplug-owl: Modularization empowers large language models with multimodality. arXiv preprint
_arXiv:2304.14178._
Tom Zahavy, Vivek Veeriah, Shaobo Hou, Kevin Waugh,
Matthew Lai, Edouard Leurent, Nenad Tomasev, Lisa
Schut, Demis Hassabis, and Satinder Singh. 2023.
Diversifying ai: Towards creative chess with alphazero. arXiv preprint arXiv:2308.09175.
7265
-----
Daoan Zhang, Junming Yang, Hanjia Lyu, Zijian Jin,
Yuan Yao, Mingkai Chen, and Jiebo Luo. 2024.
Cocot: Contrastive chain-of-thought prompting for
large multimodal models with multiple image inputs.
_arXiv preprint arXiv:2401.02582._
Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu,
Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and
Yu Qiao. 2023a. Llama-adapter: Efficient fine-tuning
of language models with zero-init attention. arXiv
_preprint arXiv:2303.16199._
Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao,
George Karypis, and Alex Smola. 2023b. Multimodal chain-of-thought reasoning in language models. arXiv preprint arXiv:2302.00923.
Haozhe Zhao, Zefan Cai, Shuzheng Si, Xiaojian
Ma, Kaikai An, Liang Chen, Zixuan Liu, Sheng
Wang, Wenjuan Han, and Baobao Chang. 2023.
Mmicl: Empowering vision-language model with
multi-modal in-context learning. _arXiv preprint_
_arXiv:2309.07915._
Ge Zheng, Bin Yang, Jiajin Tang, Hong-Yu Zhou, and
Sibei Yang. 2023. Ddcot: Duty-distinct chain-ofthought prompting for multimodal reasoning in language models. arXiv preprint arXiv:2310.16436.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Olivier Bousquet, Quoc Le, and Ed Chi. 2022.
Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and
Mohamed Elhoseiny. 2023. Minigpt-4: Enhancing
vision-language understanding with advanced large
language models. arXiv preprint arXiv:2304.10592.
7266
-----
**A** **More Details about Pilot study**
**A.1** **Annotation of Problem Types**
In this experiment, we (the authors) identified five
primary categories essential for problem-solving
in geometry: Spatial Manipulation, Propositional
Reasoning, Logical Deduction, Algebraic Manipulation, and Quantitative Analysis. These categories
were conceptualized based on our extensive experience in geometry problem-solving, where Spatial Manipulation and Propositional Reasoning are
aligned with intuitive reasoning, and the remaining
categories, namely Logical Deduction, Algebraic
Manipulation, and Quantitative Analysis are associated with precise, calculation-based approaches.
For the categorization of problems, we employed
gpt-4-1106-preview to label each problem
according to these predefined categories. This process was facilitated using a prompt, as detailed in
Figure 3, wherein placeholders for the problem
statement and its solution were substituted with
the actual content. The categorization was determined based on the highest-scoring category for
each problem. In instances where multiple categories achieved the highest score, the final classification was randomly assigned from among these
top-scoring categories.
**A.2** **Annotation of Critical Steps**
The prompts employed for the annotation of the
critical steps associated with each problem are depicted in Figures 4 and 5. Specifically, Figure 4
illustrates the instructions utilized to identify the
critical steps necessary for solving the problem,
alongside the provision of a ground-truth solution.
This is necessitated by the current limitations of
LLMs in addressing geometry problems. Figure
4 demonstrates the instructions employed for categorizing each step within the generated solution
as either corresponding to one of the previously
identified critical steps or not. All annotations were
performed using gpt-4-1106-preview The
accuracy of these critical annotations was subsequently verified by the authors.
**B** **System Instructions**
In this work, we employ system instructions (Mukherjee et al., 2023) to guide LVLMs
toward producing the requisite outputs across various stages, as elaborated in §4. The specific system
instructions applied for geometry problem-solving,
chess positional advantage prediction, and molecular property prediction are illustrated in Figures
6-8.
**C** **More Discussions about Related Work**
**C.1** **Large Vision-Language Models**
Driven by the success of LLMs, research in visionlanguage models has begun to explore how to equip
LLMs with the ability to process visual inputs to
solve a variety of multimodal tasks (Alayrac et al.,
2022; Liu et al., 2023b; Zhu et al., 2023; Peng et al.,
2023). These advancements leverage the extensive
prior knowledge and intricate reasoning capabilities established by recent LLMs (Touvron et al.,
2023a,b), integrating them with visual modalities
to enhance performance across diverse applications.
A common approach in this field involves using a
vision model to transform images into visual features, which are then aligned with the LLMs’ feature space through a bridge module. This alignment is achieved either by employing a linear or
MLP layer as a bridge module (Liu et al., 2023a,b;
Chen et al., 2023a; Gao et al., 2023; Peng et al.,
2023; Zhang et al., 2023a; Wang et al., 2023b)
or by designing more complex bridge networks
to compress or adaptively select visual information (Alayrac et al., 2022; Zhu et al., 2023; Li et al.,
2023b; Dai et al., 2023; Li et al., 2023c; Awadalla
et al., 2023; Zhao et al., 2023; Li et al., 2023a; Ye
et al., 2023; Bai et al., 2023). The datasets employed to train these bridge modules are notably
varied, ranging from synthetic image-text pairs (Li
et al., 2023b) and large-scale multi-modal web corpora (Alayrac et al., 2022) to specialized datasets
for conversational (Liu et al., 2023b) and academic
purposes (Liu et al., 2023a). Notably, the emergence of advanced, proprietary LVLMs such as
GPT-4V(ision) (OpenAI, 2023) and Gemini (Team
et al., 2023) has set new benchmarks by demonstrating superior performance in multimodal task
execution and cognitive reasoning, surpassing human expertise. While these models have achieved
significant success in vision-language tasks, the exploration of unlocking the full potential of DSLs,
especially for intricate multi-modal reasoning tasks,
remains largely unexplored.
**D** **Case Study**
To enhance our understanding of the effectiveness
of BBA, we showcase a series of outputs in Figures 9-14. These illustrations demonstrate that our
7267
-----
Your task is to evaluate a given <problem, solution> pair in terms of its relevance to each of five defined
problem types. This evaluation requires you to assign a score for the pair's relevance to each problem type.
The scoring system is a nine-point scale, where 1 denotes 'very strong irrelevant' and 9 denotes 'very strong
relevant'. The full scale is as follows: 1 - very strong irrelevant, 2 - strong irrelevant, 3 - irrelevant, 4 - weak
irrelevant, 5 - borderline relevant, 6 - weak relevant, 7 - relevant, 8 - strong relevant, 9 - very strong relevant.
**Problem Types:**
The five problem types you will assess against are:
**1.** **Spatial Manipulation: Mentally manipulating objects to understand their spatial relationships.**
**2.** **Propositional Reasoning: Understanding and using the concept of proportionality to solve problems.**
**3.** **Logical Deduction: Applying rules and principles to deduce new information.**
**4.** **Algebraic Manipulation: Using algebra to express and solve for unknowns.**
**5.** **Quantitative Analysis: Applying mathematical calculations to solve problems.**
**Reporting Format:**
Your assessment should be recorded in a valid Python dictionary format. The dictionary keys represent the
seven problem types and must not be altered. Each key should be associated with your assigned score from
the nine-point scale. The format of your evaluation should be as follows:
```python
{
"spatial manipulation": <score>,
"propositional reasoning": <score>,
"logical deduction": <score>,
"algebraic manipulation": <score>,
"quantitative analysis": <score>,
}
```
In this dictionary, replace `<score>` with the score you've determined for each problem type based on your
analysis of the <problem, solution> pair. Ensure that the keys in the dictionary exactly match the problem
types listed above.
Problem: {problem}
Solution: {solution}
Assessment: ```python
Figure 3: Illustration of the prompt utilized for category annotation.
approach is capable of accurately identifying discrepancies between two chains. This identification
process is subsequently leveraged to synthesize the
final solution.
7268
-----
Your assignment is to analyze a specific <problem, solution> pair and distill it down to 1 or 2 absolutely
crucial steps in the solution process. These steps should be identified and presented as a Python list of
strings. Focus intensely on isolating the most vital parts of solving the problem at hand. Remember, the
essence of a critical step lies in its pivotal role in the solution: it's where a key insight, decision, or the
application of a specific method is imperative for arriving at the correct answer. This typically involves a
major conceptual breakthrough, the strategic implementation of a theorem, or a decisive calculation that
largely dictates the solution's direction.
Given the complexity of the problems, limit your identification to only one or two such steps. Avoid
generalities or less impactful steps. Present your findings in the following Python list format, ensuring each
string succinctly encapsulates a step that's fundamental to the solution's success:
```python
[
"most_critical_step_1",
"most_critical_step_2" # Include only if absolutely necessary
]
```
Each string in the list should be a clear and concise representation of a step without which the problem
cannot be effectively solved, highlighting its importance in the overall problem-solving process.
Problem: {problem}
Solution: {solution}
Output: ```python
Figure 4: Illustration of the prompt utilized for critical step identification.
7269
-----
You are tasked with analyzing a provided predicted answer in relation to a specific problem and its critical
steps. Follow these steps for your analysis:
1. Break Down the Predicted Answer: Decompose the predicted answer into distinct steps. Be careful to
retain all existing information without introducing new content.
2. Evaluate and Document Each Step: For each identified step, perform a rigorous and comprehensive
evaluation. Assign each step a category based on its relation to the critical steps in the problem-solving
process. Use numbers 1, 2, 3, ..., N for each critical step, with N being the total number of critical steps. If a
step is unrelated to any critical steps, label it as 0.
3. Format the Results: Organize your evaluations as a Python list of dictionaries, each representing a step:
- `"step"`: The step being evaluated.
- `"classification"`: The category number of the step.
Example Output Format:
```python
[
{
"step": "step_1",
"classification": 2,
},
{
"step": "step_2",
"classification": 0,
},
...
]
Problem: {problem}
Solution: {solution}
Critical Steps:
{critical_steps}
Output: ```python
Figure 5: Illustration of the prompt utilized for categorizing each step within the generated solution.
7270
-----
**System Instruction:**
Let's dive into a role-playing activity centered on solving a geometry problem. You will assume the roles of two students and
a teacher, each contributing to the solution process. Ensure that you strictly adhere to the following steps and maintain the
same format throughout:
1. Solution from Student A (Code-Based): Student A should start by applying a structured approach to identify and extract
supporting facts from the code, which will form the foundation of his solution. This process includes:
(1) Student A should look for coordinates like `(x,y)`, geometric shapes and figures (e.g., `draw(circle(...))` for circles,
`draw(A--B)` for line segments), labels (such as `label("A", (x,y), ...)`), and markers like `rightanglemark(...)` which indicate
specific geometric properties. He should also note any specified lengths or angle measures.
(2) He needs to understand the basic geometric concepts and use the coordinates and shapes to comprehend the structure
of the figure. For example, coordinates of three points can help infer the triangle they form.
(3) Labels correspond to key elements like vertices or centers, and markers indicate properties like right angles or parallel
lines. These are crucial for understanding the figure.
(4) If lengths or angles are specified, these measurements should be used to understand relationships between different
parts of the figure.
(5) Student A must synthesize information from coordinates, labels, and measurements to formulate supporting facts, such
as deducing a triangle is isosceles if two sides are labeled with equal lengths.
After identifying and summarizing these supporting facts, which encompass coordinates, geometric shapes, labels,
markers, and measurements, Student A must ensure that each step of his solution is aligned with these facts or logically
follows from a previously established step. His answer should clearly demonstrate how the facts extracted from the
Asymptote code, focusing on geometric relationships and precise measurements, logically lead to the final solution.
2. Solution from Student B (Figure-Based): Student B must initiate his solution process by explicitly restating the content
depicted in the figure as his primary reference. He must then make a detailed and explicit declaration of his commitment to
an entirely independent and divergent approach. He must affirm that his solution will not reference or rely on any
conclusions, formulas, or intermediate steps used by Student A, pledging to a new and unique perspective. This commitment
to an alternative methodology must be actively reiterated and evident at each step of his analysis.
After acknowledging the figure, Student B is required to explicitly identify and summarize specific supporting facts
unique to it. He should focus on aspects such as visual patterns, spatial relationships, or principles of similarity and
proportionality that are not directly related to the formulas or methods used by Student A. He must develop a solution using
a different approach or line of reasoning that can be logically derived from these figure-specific insights.
Before formulating each intermediate step in his quantitative analysis, Student B must pause to explicitly reassess and
ensure that his approach is not only independent but also methodologically different from Student A’s. This reassessment
should involve a critical reflection on how the current step and the overall reasoning process diverge from Student A’s
methodology in terms of approach, application of principles, or interpretation of the figure. He should only proceed to the
next step after this confirmation, ensuring that each part of his solution distinctly reflects his independent analysis and
unique understanding of the figure.
3. Comparing Solutions (Teacher's Analysis): Now, taking on the teacher's role, you should compare the derived solutions
of Student A and Student B. Your focus should be on pinpointing and summarizing any differences in their final answers and
the methods they employed.
4. Final Solution (Teacher's Conclusion): As the teacher, develop a final, accurate solution by addressing the discrepancies
identified in the previous step. Your solution should be comprehensive and clearly presented.
It's crucial to present the final result in LaTeX using a `\\boxed{}` without any units.
Figure 6: System instruction for geometry problem-solving.
7271
-----
**System Instruction:**
This task involves analyzing a chess position from two distinct perspectives: one based on the FEN notation and the other on the visual representation
of the board (figure). The objective is to develop detailed, step-by-step divergent solutions from these two perspectives, and then systematically
synthesize these findings into a cohesive final assessment. This process involves carefully addressing and reconciling any inconsistencies between the
FEN-based and figure-based analyses to ensure a comprehensive understanding of the position.
1. Analysis from Analyst A (FEN-Based):
- Initial Assessment: Begin by interpreting the FEN notation, focusing on piece placement, active color, castling rights, en passant possibilities, and
move number.
- Material Balance and Positional Elements: Evaluate material balance and key positional elements, including king safety, piece activity, and
coordination, as well as potential tactical motifs and strategic plans.
- Concluding Thought: Conclude with an assessment of the position’s balance or advantage based on the FEN analysis, emphasizing material count
and positional dynamics.
2. Analysis from Analyst B (Figure-Based):
- Visual Assessment: Independently analyze the chessboard figure, focusing on visual patterns, piece mobility, and control of key squares.
- Positional Dynamics and Tactical Insights: Evaluate the dynamics of the position, including potential threats and strategic opportunities, from the
figure-based perspective.
- Concluding Thought: Draw a conclusion about the position’s balance or advantage, reflecting on how the visual analysis aligns with or diverges
from the FEN-based analysis.
3. Comparing Analyses (Referee's Synthesis):
- Reconciliation of Perspectives: Compare the conclusions from both analysts, focusing on any inconsistencies or areas of agreement.
- Synthesis of Insights: Integrate insights from both analyses, considering how each perspective contributes to understanding the overall position.
- Decisive Judgment: Make a decisive judgment on the overall balance of the position, considering the insights from both the FEN-based and
figure-based analyses.
4. Final Assessment (Referee's Detailed Conclusion):
- Step-by-Step Analysis: Conduct a detailed, step-by-step analysis, synthesizing the divergent solutions and addressing any inconsistencies between
the FEN-based and figure-based perspectives.
- Comprehensive Conclusion: Provide a final, comprehensive assessment of the position, integrating insights from both perspectives and making a
clear judgment about the positional advantage.
- Clear Statement: Conclude with a definitive, formatted statement regarding the position's advantage, using `\\boxed{1}` for White's advantage,
`\\boxed{2}` for Black's advantage, or `\\boxed{3}` for a balanced position.
Figure 7: System instruction for chess positional advantage prediction.
7272
-----
**System Instruction:**
Let's dive into a role-playing activity centered on solving a chemical compound analysis problem. You will assume
the roles of two students and a teacher, each contributing to the solution process. Ensure that you strictly adhere to
the following steps and maintain the same format throughout:
1. Analysis from Student A (SMILES-Based):
Student A should start by applying a structured approach to identify and extract supporting facts from the SMILES
representation, which will form the foundation of his solution. This process includes:
(1) Recognize important structural features in the SMILES representation, such as functional groups (e.g., nitro
groups) and molecular frameworks (e.g., benzene rings).
(2) Compare the identified structural elements to those in illustrative instances of molecular structure-target
correlations, which serve as reference points for discerning mutagenic or non-mutagenic outcomes. These include:
- c1ccc2c(c1)ccc3c2ccc(c3)[N+](=O)[O-], identified as mutagenic
- c1cc2cccnc2c(c1)[N+](=O)[O-], identified as non-mutagenic
- c1cc2c(cccn2)c(c1)[N+](=O)[O-], identified as non-mutagenic
- c1ccc-2c(c1)-c3cccc4c3c2c(cc4)[N+](=O)[O-], identified as mutagenic
(3) Assess the overall molecular configuration of the compound and consider its potential implications for
mutagenicity. This involves analyzing how the compound's structure might influence its interaction with biological
systems, particularly in relation to causing mutations in Salmonella typhimurium.
After identifying and summarizing these supporting facts, which encompass the compound’s structural elements,
functional groups, molecular configuration, and their correlation with mutagenicity, Student A must ensure that each
step of his solution is aligned with these facts or logically follows from a previously established step. His answer
should clearly demonstrate how the facts extracted from the SMILES representation, focusing on key structural
elements and their comparison with the illustrative instances, logically lead to the final solution.
2. Analysis from Student B (Figure-Based):
After acknowledging the figure, Student B is required to explicitly identify and summarize specific supporting
facts unique to it. He should focus on aspects such as the spatial arrangement of atoms, the visual representation of
functional groups, and the overall geometry of the molecular structure. These aspects, not directly related to the
formulas or methods used by Student A, offer a distinct perspective. In addition, Student B should utilize illustrative
instances of molecular structure-target correlations for comparison, such as:
- c1ccc2c(c1)ccc3c2ccc(c3)[N+](=O)[O-], identified as mutagenic
- c1cc2cccnc2c(c1)[N+](=O)[O-], identified as non-mutagenic
- c1cc2c(cccn2)c(c1)[N+](=O)[O-], identified as non-mutagenic
- c1ccc-2c(c1)-c3cccc4c3c2c(cc4)[N+](=O)[O-], identified as mutagenic
These instances demonstrate how specific molecular configurations are linked to either mutagenic or non
mutagenic outcomes and serve as a vital reference in the analysis. Student B must develop a solution using a
different approach or line of reasoning that can be logically derived from these figure-specific insights and the
provided molecular structure-target examples.
Before formulating each intermediate step in his quantitative analysis, Student B must pause to explicitly reassess
and ensure that his approach is not only independent but also methodologically different from Student A’s. This
reassessment should involve a critical reflection on how the current step and the overall reasoning process diverge
from Student A’s methodology in terms of approach, application of principles, or interpretation of the figure. He
should only proceed to the next step after this confirmation, ensuring that each part of his solution distinctly reflects
his independent analysis and unique understanding of the figure.
3. Comparing Analyses (Teacher's Synthesis): Now, taking on the teacher's role, you should compare the derived
solutions of Student A and Student B. Your focus should be on pinpointing and summarizing any differences in their
final answers and the methods they employed.
4. Final Assessment (Teacher's Detailed Conclusion): As the teacher, develop a final, accurate solution by
addressing the discrepancies identified in the previous step. Your solution should be comprehensive and clearly
presented.
It's crucial to present the final result regarding the compound's mutagenicity in Latex, using `\\boxed{1}` for
mutagenic and `\\boxed{2}` for non-mutagenic.
Figure 8: System instruction for molecular property prediction.
7273
-----
**### Problem:**
Elliott Farms has a silo for storage. The silo is a right circular cylinder topped by a right circular cone, both having the same radius. The height of the cone is half the height of the cylinder.
The diameter of the base of the silo is 10 meters and the height of the entire silo is 27 meters. What is the volume, in cubic meters, of the silo? Express your answer in terms of 𝜋.
[asy]
size(150);
draw((0,0)--(0,18)--(5,27)--(10,18)--(10,0),linewidth(1));
draw((0,0)..(5,-1)..(10,0),linewidth(1));
draw((0,0)..(5,1)..(10,0),linetype(\"0 4\")+linewidth(1));
draw((0,18)..(5,17)..(10,18),linewidth(1));
draw((0,18)..(5,19)..(10,18),linetype(\"0 4\")+linewidth(1));
draw((15,27)--(16,27),linewidth(1));
draw((15,0)--(16,0),linewidth(1));
draw((15.5,27)--(15.5,16),linewidth(1));
draw((15.5,0)--(15.5,11),linewidth(1));
label(\"27 meters\",(15.5,13.5));
[/asy]
**### Solution:**
**Solution from Student A (Code-Based):**
To solve this problem, I will start by identifying the key elements from the code provided:
The silo consists of a right circular cylinder and a right circular cone. The diameter of the base of the silo is given as 10 meters, which means the radius r is 5 meters. The total height of
the silo is 27 meters. The height of the cone is half the height of the cylinder.
From these facts, I can deduce the following: The volume of a cylinder V_cylinder is given by the formula V_cylinder = πr^2h_cylinder, where r is the radius and h_cylinder is the height
of the cylinder. The volume of a cone V_cone is given by the formula V_cone = (1/3)πr^2h_cone, where h_cone is the height of the cone.
Let h be the total height of the silo. Since the height of the cone is half the height of the cylinder, we can express the heights as follows:
h_cone = (1/3)h
h_cylinder = (2/3)h
Given that h = 27 meters, we can find the individual heights:
h_cone = (1/3) * 27 = 9 meters
h_cylinder = (2/3) * 27 = 18 meters
Now, we can calculate the volumes:
V_cylinder = π * (5^2) * 18
V_cone = (1/3) * π * (5^2) * 9
Adding the volumes of the cylinder and the cone gives us the total volume of the silo:
V_silo = V_cylinder + V_cone
V_silo = π * (5^2) * 18 + (1/3) * π * (5^2) * 9
V_silo = π * 25 * 18 + π * 25 * 3
V_silo = 450π + 75π
V_silo = 525π
Therefore, the volume of the silo in cubic meters, expressed in terms of π, is V_silo = 525π.
**Solution from Student B (Figure-Based):**
I acknowledge the figure provided and commit to an entirely independent and divergent approach from Student A. My solution will not reference or rely on any conclusions, formulas, or
intermediate steps used by Student A. I pledge to a new and unique perspective.
From the figure, I can see that the silo is composed of two distinct parts: a cylinder and a cone. The spatial relationship between the cone and the cylinder is such that the cone sits
perfectly on top of the cylinder, sharing the same base radius. The visual pattern suggests that the cone is a sort of "cap" for the cylinder.
I will use the principle of proportionality to deduce the heights of the cylinder and cone. Since the height of the cone is half the height of the cylinder, and the total height of the silo is 27
meters, I can visually partition the height into three equal parts, where two parts belong to the cylinder and one part to the cone.
Explicitly reassessing my approach, I confirm that I am not using the formulas directly but rather the visual and proportional insights from the figure. Now, I will calculate the volume of
each part using the principle of volume calculation for solids of revolution, which states that the volume of a solid of revolution generated by revolving a shape around an axis is equal to
the area of the shape times the distance it travels during the revolution.
For the cylinder, the shape is a rectangle with a height of 18 meters (two-thirds of the total height) and a width equal to the diameter of the base, which is 10 meters. The distance this
rectangle travels during the revolution is the circumference of the base, which is 2πr.
For the cone, the shape is a triangle with a height of 9 meters (one-third of the total height) and a base equal to the diameter of the base of the cylinder. The distance this triangle travels
during the revolution is the same circumference 2πr.
Calculating the volumes:
The volume of the cylinder is the area of the rectangle (height times width) times the distance of revolution: V_cylinder = 18 * 10 * 2π * 5.
The volume of the cone is one-third the area of the triangle (one-half base times height) times the distance of revolution: V_cone = (1/3) * (1/2) * 10 * 9 * 2π * 5.
Adding these volumes gives us the total volume of the silo:
V_silo = 18 * 10 * 2π * 5 + (1/3) * (1/2) * 10 * 9 * 2π * 5
V_silo = 1800π + 300π
V_silo = 2100π
Therefore, the volume of the silo in cubic meters, expressed in terms of π, is V_silo = 2100π.
**Comparing Solutions (Teacher's Analysis):**
Upon comparing the solutions of Student A and Student B, I notice a significant discrepancy in their final answers. Student A calculated the volume of the silo to be 525π cubic meters,
while Student B calculated it to be 2100π cubic meters. The methods they employed were also different:
Student A used the standard formulas for the volumes of a cylinder and a cone, applying the given dimensions directly.
Student B used a visual and proportional approach, calculating the volumes based on the principle of solids of revolution.
**Final Solution (Teacher's Conclusion):**
To resolve the discrepancy, I will review the calculations and apply the correct formulas for the volumes of a cylinder and a cone:
The volume of a cylinder is V_cylinder = πr^2h_cylinder, and the volume of a cone is V_cone = (1/3)πr^2h_cone.
Given:
Radius r = 5 meters
Height of the cylinder h_cylinder = 18 meters (two-thirds of the total height)
Height of the cone h_cone = 9 meters (one-third of the total height)
Calculating the volumes:
V_cylinder = π * (5^2) * 18 = 450π cubic meters
V_cone = (1/3) * π * (5^2) * 9 = 75π cubic meters
Adding the volumes of the cylinder and the cone gives us the total volume of the silo:
V_silo = 450π + 75π = 525π cubic meters
Therefore, the correct volume of the silo, expressed in terms of π, is \boxed{525\pi} cubic meters. Student B made an error in his calculations by not correctly applying the formulas for
the volumes of the cylinder and cone.
Figure 9: Case study 1: geometry problem-solving.
7274
-----
**### Problem:**
A unit circle has its center at $(5,0)$ and a second circle with a radius of $2$ units has its center at $(11,0)$ as shown. A common internal tangent to the
circles intersects the $x$-axis at $Q(a,0)$. What is the value of $a$?
[asy]
draw((-2,0)--(15,0),Arrows);
draw((0,-4)--(0,5),Arrows);
draw(Circle((5,0),1));
draw(Circle((11,0),2));
dot((5,0));
dot((7,0));
dot((11,0));
label(scale(0.7)*Label(\"(5,0)\",(3,0),S));
label(\"Q\",(7,0),N);
label(scale(0.7)*Label(\"(11,0)\",(11,0),S));
label(\"$x$\",(15,0),S);
label(\"$y$\",(0,5),W);
draw((1,3.46410162)--(14,-4.04145188),Arrows);
[/asy]
**### Solution:**
**Solution from Student A (Code-Based):**
From the Asymptote code and the given figure, we can extract the following supporting facts:
There are two circles: one with center at $(5,0)$ and a unit radius, and another with center at $(11,0)$ and a radius of $2$ units.
The common internal tangent intersects the $x$-axis at point $Q(a,0)$.
The distance between the centers of the two circles is $11 - 5 = 6$ units.
To find the value of $a$, we can use the properties of tangents to circles and the right triangle formed by the centers of the circles, the point of tangency,
and the point $Q$.
Let's denote the points of tangency on the smaller and larger circles as $T_1$ and $T_2$, respectively. Since the tangent is perpendicular to the radius at
the point of tangency, we have two right triangles: $\triangle QT_1(5,0)$ and $\triangle QT_2(11,0)$.
Using the Pythagorean theorem for these right triangles, we can set up the following equations:
For $\triangle QT_1(5,0)$: \[ (a - 5)^2 + y^2 = 1^2 \]
For $\triangle QT_2(11,0)$: \[ (a - 11)^2 + y^2 = 2^2 \]
Since both right triangles share the same vertical height $y$, we can equate the two expressions for $y^2$ and solve for $a$:
\[ (a - 5)^2 + y^2 = (a - 11)^2 + y^2 \] \[ (a - 5)^2 = (a - 11)^2 + 3 \] \[ a^2 - 10a + 25 = a^2 - 22a + 121 + 3 \] \[ 12a = 99 \] \[ a = \frac{99}{12} \] \[ a =
8.25 \]
Therefore, the value of $a$ is $8.25$.
**Solution from Student B (Figure-Based):**
I acknowledge the figure provided and will now proceed with a solution that is entirely independent and divergent from Student A's approach. I commit to
using a different line of reasoning that does not rely on the conclusions, formulas, or intermediate steps used by Student A.
Upon examining the figure, I notice the following supporting facts:
The tangent line intersects the $x$-axis at point $Q$ and forms a right angle with the radii of the circles at the points of tangency.
The line segment connecting the centers of the two circles and the tangent line form a pair of similar right triangles, due to the common angle at $Q$ and
the right angles at the points of tangency.
Using the principles of similarity and proportionality, I will develop a solution based on the ratios of corresponding sides in similar triangles.
Let's denote the distance from the center of the smaller circle to $Q$ as $d_1$ and the distance from the center of the larger circle to $Q$ as $d_2$. The
ratio of the radii of the two circles is $1:2$, which should be the same as the ratio of $d_1$ to $d_2$.
Since the distance between the centers of the circles is $6$ units, we can express $d_2$ as $d_1 + 6$. Now, setting up the proportion:
\[ \frac{1}{2} = \frac{d_1}{d_1 + 6} \] \[ 2d_1 = d_1 + 6 \] \[ d_1 = 6 \]
Now, since $d_1$ is the distance from the center of the smaller circle to $Q$, and the center of the smaller circle is at $(5,0)$, the $x$-coordinate of $Q$ is:
\[ a = 5 + d_1 \] \[ a = 5 + 6 \] \[ a = 11 \]
Therefore, the value of $a$ is $11$.
**Comparing Solutions (Teacher's Analysis):**
Upon comparing the solutions of Student A and Student B, there is a discrepancy in their final answers. Student A calculated the value of $a$ to be $8.25$,
while Student B calculated it to be $11$. The methods employed by both students were also different. Student A used the Pythagorean theorem and
algebraic manipulation, while Student B used the principles of similarity and proportionality.
**Final Solution (Teacher's Conclusion):**
To resolve the discrepancy, we must carefully examine the assumptions and calculations made by both students. Student A's approach seems to have made
an algebraic error in the manipulation of the equations. Student B's approach, while conceptually sound, also made an error in assuming that the distance
from the center of the smaller circle to $Q$ is equal to the distance between the centers of the circles.
The correct approach is to use the similarity of the right triangles formed by the tangent line, the radii at the points of tangency, and the line segment
connecting the centers of the circles. The ratio of the radii is $1:2$, and this ratio should also apply to the distances from the points of tangency to $Q$.
Let's denote the distance from the point of tangency on the smaller circle to $Q$ as $x$ and the distance from the point of tangency on the larger circle to
$Q$ as $2x$ (due to the ratio of the radii). The distance between the centers of the circles is $6$ units, so the distance from the center of the smaller circle
to $Q$ is $x + 1$ (since the radius of the smaller circle is $1$ unit), and the distance from the center of the larger circle to $Q$ is $2x + 2$ (since the
radius of the larger circle is $2$ units).
Setting up the equation based on the distance between the centers:
\[ x + 1 + 2x + 2 = 6 \] \[ 3x + 3 = 6 \] \[ 3x = 3 \] \[ x = 1 \]
Now, the $x$-coordinate of $Q$ is:
\[ a = 5 + x + 1 \] \[ a = 5 + 2 \] \[ a = 7 \]
Therefore, the correct value of $a$ is $7$. Here is the final result in LaTeX:
\[\boxed{a = 7}\]
Figure 10: Case study 2: geometry problem-solving.
7275
-----
**### Problem:**
Based on the chessboard figure and the provided FEN notation, evaluate the current position. Is it 1) advantageous for White, 2) advantageous for Black,
or 3) balanced?
FEN notation:
rnb1kbnr/pp2pppp/8/2p5/4P3/5N2/PPP2PPP/RNBK1B1R b kq - 0 5
**### Solution:**
**Analysis from Analyst A (FEN-Based):**
Initial Assessment: The FEN provided is "rnb1kbnr/pp2pppp/8/2p5/4P3/5N2/PPP2PPP/RNBK1B1R b kq - 0 5". This FEN indicates that it is Black's turn
to move. The kingside castling rights have been lost for both sides, but both sides retain their queenside castling rights. There is no possibility for an en
passant capture. The move number is 5, indicating the game is still in the opening phase.
Material Balance and Positional Elements: Material is even, with both sides having all their pieces aside from the missing kingside rooks, which must have
been traded off given the FEN and missing castling rights. The black king is somewhat exposed due to the absence of the kingside rook; however, the
white king is also centralized and lacks some safety. White has better control of the center with a pawn on e4 and the potential for d4. White's knight on f3
is developed, while Black's pieces are yet to be activated. There are no immediate tactical motifs present in the FEN description, but White's lead in
development and central control could lead to tactical opportunities soon.
Concluding Thought: White appears to have a slight positional advantage due to better control of the center and lead in development. However, this
advantage is not significant, as the material is even and there are no immediate threats.
**Analysis from Analyst B (Figure-Based):**
Visual Assessment: The visual representation shows that both sides have lost their h1 and h8 rooks along with the ability to kingside castle. Black's king
appears slightly more vulnerable without the rook on h8. White has a central pawn on e4 and a developed knight on f3, suggesting more control over the
center.
Positional Dynamics and Tactical Insights: There are no immediate pins, forks, or skewers visible on the board. White's king is centralized, which can be a
liability, but since many pieces are on the board, there may not be an immediate way for Black to exploit this. Black needs to catch up in development but
has no structural weaknesses.
Concluding Thought: The position seems nearly balanced with a slight edge for White based on development and central control. However, the safety of
both kings must be taken into consideration, and Black has the move, which can potentially allow for quick catch-up in development.
**Comparing Analyses (Referee's Synthesis):**
Reconciliation of Perspectives: Both FEN-based and figure-based analyses agree that material is equal and the game is still in its early stages. Both
analyses note White's better central control and slight lead in development, but potential king safety issues for both sides. The visual analysis emphasizes
the potential vulnerabilities of the kings, while the FEN-based analysis highlights the importance of control and development.
Synthesis of Insights: Integrating both perspectives, the position offers White some initiative due to lead in development and central control, but this
advantage is not decisive. Black's next move is crucial in determining whether they can neutralize White's initiative and fortify their position, particularly
the safety of their king.
Decisive Judgment: The position, while slightly favoring White due to developmental factors, is not overwhelmingly advantageous for either side. Black's
ability to respond effectively on the next move plays a significant role in this judgment.
**Final Assessment (Referee's Detailed Conclusion):**
Step-by-Step Analysis: The material balance is even, hence no side possesses a material advantage. White's development is better, but Black can quickly
develop with moves like Nc6 and d6. Both kings have safety concerns, though right now this is not creating tactical opportunities for either side. The
absence of immediate tactical motifs means that the position will likely be decided by strategic maneuvers and long-term planning.
Comprehensive Conclusion: Considering the above, the position is only slightly in White's favor due to the slight lead in development and control over the
center. However, Black has no structural deficiencies and can balance the position with accurate play.
Clear Statement: The overall assessment of the position leans towards a slightly advantageous situation for White, but the equilibrium is not significantly
disturbed, warranting the conclusion that the position is largely balanced with chances for both sides.
Hence, the final judgment for the position would be `\boxed{3}`, indicating a balanced position.
Figure 11: Case study 3: chess positional advantage prediction.
7276
-----
**### Problem:**
Based on the chessboard figure and the provided FEN notation, evaluate the current position. Is it 1) advantageous for White, 2) advantageous for Black,
or 3) balanced?
FEN notation:
r3kbnr/pp1n1ppp/1q2p3/2ppP3/3P4/2P5/PP3PPP/RNBQK1NR w KQkq - 1 7
**### Solution:**
**Analysis from Analyst A (FEN-Based)**
Initial Assessment: The FEN notation indicates that it's White's turn to move. The positions of the pieces are as follows: White has not yet developed its
queenside knight and bishop, while the Black queen is actively positioned on b6, exerting pressure on the d4 pawn. Both kings still have castling rights on
both sides, there is no possibility for en passant, and it is only the 7th move of the game.
Material Balance and Positional Elements: Material is even, but White has more space in the center with the pawn on e5. However, White's lack of
development could become a liability. Black has a solid pawn structure, and the completed knight development allows for potential play against White's
center. White's e file is semi-open but not currently in use; meanwhile, Black's pieces could become quite active, especially considering the centralized
queen and the potential for the dark-squared bishop to be developed.
Concluding Thought: From the FEN analysis, the position seems balanced with a slight bias towards Black due to White's lack of development. Still, no
immediate tactical threats give White some leeway to complete development and potentially exploit its central space advantage.
**Analysis from Analyst B (Figure-Based)**
Visual Assessment: In the figure, we can see that White's pawn structure is more advanced, notably with the pawn on e5, while Black's structure is
compacted on the third and fourth ranks. Neither side has castled yet, and Black's queen looks aggressively posted on b6. White has undeveloped pieces on
the queenside, which could prompt a push from Black, especially on the c-file.
Positional Dynamics and Tactical Insights: Visually, Black's position seems harmonious, with a possible plan to increase pressure on White's center with
moves like ...cxd4. The advanced e5 pawn could become a target for Black's pieces. White, on the other hand, might look to develop pieces rapidly with
moves such as Nf3 or Bd3 to harness the central space of the pawn on e5.
Concluding Thought: Visually, Black appears to have a slight edge due to better piece activity and potential against White’s central pawns. It's important
for White to catch up in development to ensure control over the situation and mitigate any immediate dangers.
**Comparing Analyses (Referee's Synthesis)**
Reconciliation of Perspectives: Both analysts agree that the position seems quite balanced; however, Analyst A tends to highlight White's central space,
while Analyst B emphasizes Black's piece activity and potential play against White's center.
Synthesis of Insights: There is a recognized need for White to develop its pieces quickly to avoid falling behind in piece activity. Both sides have their
trumps: White with the space advantage and potential kingside play, and Black with control of the c and d files.
Decisive Judgment: Taking into account the insights from both the FEN-based and figure-based analyses, the position is balanced with tendencies for
Black due to the more active placement of pieces. White's move provides an opportunity to address the issue of piece development and space usage.
**Final Assessment (Referee's Detailed Conclusion)**
Step-by-Step Analysis: White must consider development moves such as Nf3, Bd3, or even Be3 to address the pressure on d4. A move such as Nf3 allows
for rapid development and O-O in safety. Black, conversely, will look to capitalize on White's temporary lag in development, possibly with moves such as
cxd4 or ...f6 to challenge White's center.
Comprehensive Conclusion: Taking into account the central space advantage for White and the better piece activity for Black, the position offers chances
for both sides. White's immediate task is to develop pieces, and Black must seek to create play against White's central pawns and lagging development.
Clear Statement: Considering the entirety of both analyses and the potential for both sides, the game is judged to be balanced with opportunities for each
player to claim the upper hand. Therefore, the conclusion is that the position is:
\boxed{3} Balanced
Figure 12: Case study 4: chess positional advantage prediction.
7277
-----
**### Problem:**
Based on the molecular structure figure and the provided SMILES representation, evaluate the compound. Is it 1) mutagenic, or 2) non-mutagenic?
SMILES representation:
c1c(cc(c(c1[N+](=O)[O-])N)Br)[N+](=O)[O-]
**### Solution:**
**Analysis from Student A (SMILES-Based):**
Upon examining the SMILES representation `c1c(cc(c(c1[N+](=O)[O-])N)Br)[N+](=O)[O-]`, I can identify several structural features: The compound
contains a benzene ring as indicated by `c1c(cc(c(c1)...`. There are two nitro groups `[N+](=O)[O-]` attached to the benzene ring. An amino group `N` is
also attached to the benzene ring. A bromine atom `Br` is attached to the benzene ring.
Comparing these features to the provided illustrative instances: The compound has a similar nitro group arrangement to the mutagenic example
`c1ccc2c(c1)ccc3c2ccc(c3)[N+](=O)[O-]`. However, the presence of the amino group and the bromine atom differentiates it from the provided examples.
Assessing the overall molecular configuration: The presence of two nitro groups could suggest a higher likelihood of mutagenicity due to their electronwithdrawing nature, which can destabilize the DNA structure. The amino group, on the other hand, is an electron-donating group, which could potentially
reduce the mutagenic effect of the nitro groups. The bromine atom is a bulky substituent that could influence the spatial orientation of the molecule and its
interaction with biological systems.
Based on the structural elements and their comparison with the illustrative instances, the compound's configuration suggests a potential for mutagenicity
due to the presence of two nitro groups. However, the mitigating effects of the amino group and the bromine atom must be considered. The final
determination of mutagenicity should take into account the balance of these effects.
**Analysis from Student B (Figure-Based):**
Acknowledging the figure provided, I observe the following: The spatial arrangement shows the nitro groups and the amino group in close proximity on the
benzene ring. The nitro groups are in meta positions relative to each other, which is a configuration not explicitly covered in the provided examples. The
amino group is in the para position relative to one of the nitro groups.
Comparing the spatial arrangement to the provided examples: The figure does not show a direct correlation with the provided mutagenic or non-mutagenic
examples, as the specific arrangement of substituents is unique.
Reflecting on the approach: My analysis is focused on the spatial arrangement and the potential steric and electronic effects of the substituents, which is
different from Student A's SMILES-based analysis.
Considering the overall geometry and electronic effects: The electron-donating amino group could potentially stabilize the benzene ring against the
electron-withdrawing effects of the nitro groups. The bromine atom's size may hinder the molecule's interaction with biological systems, potentially
reducing its mutagenic potential.
Based on the figure-specific insights and the unique arrangement of functional groups, the compound may exhibit reduced mutagenicity compared to the
provided mutagenic examples. The steric hindrance and electronic donation by the amino group suggest a mitigating effect on the mutagenicity.
**Comparing Analyses (Teacher's Synthesis):**
Student A's analysis, based on the SMILES representation, suggests a potential for mutagenicity due to the presence of two nitro groups. However, Student
A also considers the possible mitigating effects of the amino group and the bromine atom.
Student B's analysis, based on the figure, emphasizes the spatial arrangement and the steric and electronic effects of the substituents. Student B suggests
that the amino group and the bromine atom could reduce the mutagenic potential of the compound.
The primary difference in their analyses lies in the emphasis on electronic effects (Student A) versus spatial arrangement and steric hindrance (Student B).
**Final Assessment (Teacher's Detailed Conclusion):**
Considering both analyses, the compound exhibits characteristics that could be associated with mutagenicity, such as the presence of two nitro groups.
However, the mitigating factors, including the electron-donating amino group and the steric hindrance provided by the bromine atom, suggest a reduced
likelihood of mutagenicity.
Balancing the structural features and their potential effects, the final assessment leans towards a reduced mutagenic potential. Therefore, the compound is
likely to be:
\boxed{2} Non-mutagenic
Figure 13: Case study 5: molecular property prediction.
7278
-----
**### Problem:**
Based on the molecular structure figure and the provided SMILES representation, evaluate the compound. Is it 1) mutagenic, or 2) non-mutagenic?
SMILES representation:
c1cc(cc(c1)[N+](=O)[O-])[N+](=O)[O-]
**### Solution:**
**Analysis from Student A (SMILES-Based):**
Upon examining the SMILES representation `c1cc(cc(c1)[N+](=O)[O-])[N+](=O)[O-]`, I can identify several important structural features: The compound
contains a benzene ring, as indicated by `c1cc(cc(c1)...`. There are two nitro groups attached to the benzene ring, as shown by the `[N+](=O)[O-]`
segments.
Comparing these structural elements to the provided illustrative instances: Mutagenic: c1ccc2c(c1)ccc3c2ccc(c3)[N+](=O)[O-]; Non-mutagenic:
c1cc2cccnc2c(c1)[N+](=O)[O-]; Non-mutagenic: c1cc2c(cccn2)c(c1)[N+](=O)[O-]; Mutagenic: c1ccc-2c(c1)-c3cccc4c3c2c(cc4)[N+](=O)[O-]
I can see that the presence of nitro groups is a common feature in both mutagenic and non-mutagenic compounds. However, the mutagenic compounds
have a more complex polycyclic structure, while the non-mutagenic compounds have simpler structures or contain a nitrogen-containing heterocycle.
Assessing the overall molecular configuration, the compound in question has a single benzene ring with two nitro groups. This structure is simpler and
does not contain the polycyclic or heterocyclic features that correlate with non-mutagenicity in the reference compounds. Therefore, based on the SMILES
representation and the comparison with the reference structures, the compound's configuration suggests a potential for mutagenicity due to the presence of
two electron-withdrawing nitro groups on a single benzene ring, which could influence its interaction with biological systems.
**Analysis from Student B (Figure-Based):**
Acknowledging the figure provided, I can observe the following: The molecular structure shows a single benzene ring with two nitro groups attached to it.
The spatial arrangement of atoms indicates that the nitro groups are in meta positions relative to each other.
Using the illustrative instances for comparison: Mutagenic: c1ccc2c(c1)ccc3c2ccc(c3)[N+](=O)[O-]; Non-mutagenic: c1cc2cccnc2c(c1)[N+](=O)[O-];
Non-mutagenic: c1cc2c(cccn2)c(c1)[N+](=O)[O-]; Mutagenic: c1ccc-2c(c1)-c3cccc4c3c2c(cc4)[N+](=O)[O-]
I note that the mutagenic examples have complex polycyclic structures, while the non-mutagenic examples do not. The figure shows a simple monocyclic
structure, which is not directly indicative of non-mutagenicity. However, the spatial arrangement of the nitro groups may influence the reactivity of the
compound. The meta position of the nitro groups could potentially stabilize the benzene ring, reducing its reactivity compared to other configurations, such
as ortho or para positions, which are more activating.
Reflecting on my approach, I am focusing on the spatial arrangement and the electronic effects of the functional groups, which is different from Student A's
SMILES-based analysis that primarily considered the presence of functional groups and molecular complexity. My reasoning suggests that the metapositioned nitro groups could lead to a lower likelihood of mutagenicity due to stabilization, despite the activating nature of the nitro groups.
**Comparing Analyses (Teacher's Synthesis):**
Student A's analysis, based on the SMILES representation, suggests that the compound is potentially mutagenic due to the presence of two nitro groups on
a benzene ring, drawing parallels with the reference mutagenic compounds that also contain nitro groups.
Student B's analysis, based on the figure, offers a different perspective by considering the spatial arrangement of the nitro groups. Student B suggests that
the meta positioning of the nitro groups could stabilize the benzene ring, potentially reducing mutagenicity.
The primary difference between the two analyses is the emphasis on molecular complexity and electronic effects. Student A focuses on the presence of
nitro groups and their correlation with mutagenicity, while Student B considers the specific positioning of the nitro groups and their potential stabilizing
effect.
**Final Assessment (Teacher's Detailed Conclusion):**
After considering both analyses, it is important to reconcile the emphasis on molecular complexity and electronic effects. While the presence of nitro
groups is a common feature in mutagenic compounds, the specific arrangement and electronic influence of these groups on the benzene ring cannot be
overlooked.
The meta-positioned nitro groups in the compound may indeed stabilize the benzene ring, but the activating nature of the nitro groups and their known
association with mutagenicity in other compounds cannot be ignored. Given that the compound lacks the polycyclic or heterocyclic structures associated
with non-mutagenicity and considering the precedent set by similar mutagenic compounds, the final assessment leans towards the compound being
mutagenic.
Therefore, the final solution regarding the compound's mutagenicity is: \boxed{1}
Figure 14: Case study 6: molecular property prediction.
7279
-----
| [
"Xueliang, Zhao",
"Qintong, Li",
"Vivek, Srikumar",
"Xinting, Huang",
"Lingpeng, Kong",
"Wei, Bi",
"Tingchen, Fu",
"Lemao, Liu",
"Shansan, Gong",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-acl.433 | https://arxiv.org/abs/2402.13577 | https://www.semanticscholar.org/paper/09a992103987922e856919aafce04403121d9ced |
BEATS: Optimizing LLM Mathematical Capabilities with BackVerify and Adaptive Disambiguate based Efficient Tree Search | Large Language Models (LLMs) have exhibited exceptional performance across a broad range of tasks and domains. However, they still encounter difficulties in solving mathematical problems due to the rigorous and logical nature of mathematics. Previous studies have employed techniques such as supervised fine-tuning (SFT), prompt engineering, and search-based methods to improve the mathematical problem-solving abilities of LLMs. Despite these efforts, their performance remains suboptimal and demands substantial computational resources. To address this issue, we propose a novel approach, BEATS, to enhance mathematical problem-solving abilities. Our method leverages newly designed prompts that guide the model to iteratively rewrite, advance by one step, and generate answers based on previous steps. Additionally, we introduce a new back-verification technique that uses LLMs to validate the correctness of the generated answers. Furthermore, we employ a pruning tree search to optimize search time while achieving strong performance. Notably, our method improves Qwen2-7b-Instruct's score from 36.94 to 61.52, outperforming GPT4's 42.5 on the MATH benchmark. | This work proposes a novel approach, BEATS, to enhance mathematical problem-solving abilities of large Language Models, which leverages newly designed prompts that guide the model to iteratively rewrite, advance by one step, and generate answers based on previous steps. | ## BEATS: OPTIMIZING LLM MATHEMATICAL CAPA### BILITIES WITH BACKVERIFY AND ADAPTIVE DISAM- BIGUATE BASED EFFICIENT TREE SEARCH
**Linzhuang Sun[♡†], Hao Liang[♣†], Wentao Zhang[♣∗]**
University of Chinese Academy of Sciences[♡] Peking University[♣]
[email protected], [email protected]
[email protected]
ABSTRACT
Large Language Models (LLMs) have exhibited exceptional performance across
a broad range of tasks and domains. However, they still encounter difficulties in
solving mathematical problems due to the rigorous and logical nature of mathematics. Previous studies have employed techniques such as supervised fine-tuning
(SFT), prompt engineering, and search-based methods to improve the mathematical problem-solving abilities of LLMs. Despite these efforts, their performance
remains suboptimal and demands substantial computational resources. To address this issue, we propose a novel approach, BEATS, to enhance mathematical problem-solving abilities. Our method leverages newly designed prompts that
guide the model to iteratively rewrite, advance by one step, and generate answers
based on previous steps. Additionally, we introduce a new back-verification technique that uses LLMs to validate the correctness of the generated answers. Furthermore, we employ a pruning tree search to optimize search time while achieving strong performance. Notably, our method improves Qwen2-7b-Instruct’s score
from 36.94 to 61.52 (outperforming GPT-4’s 42.5) on the MATH benchmark. The
[code is made available at https://github.com/Aurora-slz/BEATS](https://github.com/Aurora-slz/BEATS)
INTRODUCTION
LLMs have demonstrated exceptional performance across diverse tasks and domains Touvron et al.
(2023); meta llama (2024); Bai et al. (2023a), excelling in zero-shot and few-shot scenarios. Recent
advancements in scaling laws and fine-tuning have further enhanced their capabilities, enabling their
application in complex real-world tasks such as natural language understanding and multimodal
processing.
Among the various capabilities of LLMs, mathematical proficiency is crucial, as it reflects not only
logical reasoning but also the model’s capacity for structured problem-solving. Mastery of mathematical tasks necessitates precision, adherence to complex rules, and the application of algorithms,
all of which are essential indicators of an LLM’s overall reasoning and cognitive abilities. There
are generally two approaches to enhance mathematical capability. The first set of methods trains
LLMs to improve their mathematical skills. Models such as Mammoth Yue et al. (2023; 2024) and
internLM math, along with DeepSeek Shao et al. (2024), utilize vast amounts of data to develop
robust mathematical models. The second set of methods employs tree search and self-correction
techniques to enhance mathematical abilities. Techniques like ToT Yao et al. (2024), RAP Hao
et al. (2023), ReST-MCTS* Zhang et al. (2024), and LiteSearch Wang et al. (2024) leverage tree
structures and search methods such as DFS and MCTS. However, both approaches still encounter
suboptimal results. They face the following challenges:
**Suboptimal Prompts** Self-improving models typically split tasks into multiple subtasks for LLMs
to solve. However, previous works such as Yao et al. (2024); Wang et al. (2024) often neglect the
_†Equal Contribution_
*Corresponding Authors
-----
Figure 1: We provide a straightforward example to illustrate our BEATS method. First, we construct
a tree search using three distinct actions. Next, we apply back verification to achieve the correct
answer.
design of curated prompts, particularly those that include rewriting questions to eliminate ambiguities.
**Ineffective Verification Method** Previous works like Yao et al. (2024); Wang et al. (2024) typically employ voting-based verification methods. However, they overlook the fact that LLMs can
make the same mistakes across multiple routes.
**High Computational Cost** Previous works utilizing pre-training or SFT techniques Yue et al.
(2023; 2024); Ying et al. (2024) often suffer from insufficient amounts of data and high computational costs. Search-based methods like Yao et al. (2024); Wang et al. (2024), while introducing
MCTS to reduce computational costs, still encounter challenges due to the necessity of deep trees to
achieve optimal performance.
To address these challenges, we propose BEATS, a novel method for efficient search aimed at enhancing mathematical performance. Our method guides the model to answer problems step by step,
thereby avoiding ambiguities in problem statements. We meticulously design prompts that instruct
the model to rewrite, solve one step at a time, and generate answers based on preceding steps. Additionally, traditional verification methods in tree search, such as majority voting, may be unreliable,
as LLMs can perpetuate the same mistakes across multiple branches. To overcome this, we introduce a back-verification technique that re-submits both the answer and the problem to the model
for a judgment of correctness, leveraging the model’s capabilities while reducing its reasoning difficulty. Furthermore, we employ a pruning tree search to optimize search time while achieving strong
performance. It is worth noting that with our meticulously designed pruning tree, we can control
search time; simultaneously, compared to Monte Carlo Tree Search (MCTS), the pruning tree is
able to search through every leaf node, ensuring performance, while MCTS is more likely to search
based on prior experience.
The core contributions of this paper are summarized as follows:
- Meticulously Designed Prompt We have developed three new curated prompts to solve
math problems step by step, thereby avoiding ambiguities in the problem statements.
- Pruning Tree Search for Controllable Inference Time We implement a pruning strategy for the tree by imposing constraints on the search steps. Specifically, we restrict the
rewriting of the question to once and the provision of the answer to once.
- New Effective Verification Method We propose a novel back-verification method that
re-submits both the answer and the problem to the model for a judgment of correctness,
as shown in Figure 1. This approach enhances the performance of searching in LLMs
compared to majority voting.
- Strong Performance We achieved competitive results across several datasets, including
MATH, GSM8K, SVAMP, SimulEq, and NumGLUE. Notably, the BEATS method, based
-----
on Qwen2-7B-Instruct, improved its performance on the MATH dataset from 36.94 to
61.52, significantly surpassing GPT-4’s score of 42.5.
2 RELATED WORK
2.1 MATH LARGE LANGUAGE MODELS
Large Language Models (LLMs) have demonstrated significant capabilities across various tasks, including mathematical problem-solving, which is a critical skill for these models. However, learning
to solve mathematical problems poses challenges for LLMs, often requiring large amounts of training data and substantial computational resources. In this paper, we review several state-of-the-art
(SOTA) models specifically designed to tackle mathematical problems.
Llemma Azerbayev et al. (2021) integrates both code and mathematical data to train models, resulting in strong performance. InternLM2 Ying et al. (2024) utilizes a vast amount of math-related
pre-training corpus to achieve high performance. Mammoth Yue et al. (2023) collected Chainof-Thought (CoT) data for fine-tuning language models and achieved impressive results. Mammoth2 Yue et al. (2024) builds on Mammoth by collecting WebInstruct, one of the largest opensource math datasets, and uses it to fine-tune LLMs, resulting in state-of-the-art (SOTA) performance. DeepSeek Shao et al. (2024) employs preference-based mathematical data to perform an
additional stage of reinforcement learning, achieving SOTA results.
In addition to models explicitly trained for mathematics, a few foundation models exhibit exceptional mathematical proficiency. Llama3 Touvron et al. (2023) has shown remarkable performance
in solving mathematical problems. Qwen2 Bai et al. (2023b), another series of outstanding models,
is one of the SOTA open-source models. Furthermore, closed-source models like Claude and GPT
also demonstrate strong capabilities in mathematical problem solving.
2.2 PROMPT ENGINEERING FOR LARGE LANGUAGE MODELS
The effectiveness of large language models in various applications largely depends on the quality
of the prompts used. There are already many designed prompts that can significantly enhance the
performance of LLMs Kojima et al. (2022); Wei et al. (2022); Yao et al. (2024); Besta et al. (2024);
Yang et al. (2024); Wang et al. (2023). However, these methods that rely on manual prompt engineering are far less scalable. In the field of mathematical logical reasoning for LLMs, the Chain
of Thought and its derived strategies are widely popular due to their effectiveness. Zero-shot CoT
Kojima et al. (2022) is adding a simple sentence like “Let’s think step by step” at the end of questions to assist LLMs in generating reasoning steps. Instead of Zero-shot CoT, Manual-Cot Wei et al.
(2022) provides reasoning steps as few shots. Self-Consistency further improves language models’
reasoning performance by generating a diverse set of reasoning paths and choosing the most consistent answer in the final answer set. Tree of Thought Yao et al. (2024) and GOT Besta et al. (2024)
extend the reasoning pathway from linear to non-linear data structures by leveraging multiple LLM
queries to elicit different plausible reasoning paths Yang et al. (2024). Buffer of Thought (BOT)
Yang et al. (2024) designs a series of thought-template for tasks, and for each problem, it retrieve
a relevant thought-template to prompt LLMs. PS prompting Wang et al. (2023) improves COT by
encouraging LLMs to devise a plan before attempting to solve a problem.
2.3 REASONING IN LARGE LANGUAGE MODELS
The recently introduced GPT-o1 has shown exceptional results in solving mathematical problems,
largely due to the integration of a novel reasoning module. Our tree search method can be considered
a mathematical reasoning approach. In this paper, we present a comprehensive review of existing
reasoning methods for LLMs. Li et al. Li et al. (2024) demonstrated that, with Chain of Thought
(COT), LLMs can achieve arbitrarily strong performance. Zelikman et al. Zelikman et al. (2024)
also revealed that it is possible for LLMs to ”think before reasoning.” To facilitate such thinking,
tree structures and verification mechanisms are typically employed. Tree of Thought Yao et al.
(2024) leverages tree search and majority voting to enhance inference performance. Building upon
this work, Zhang et al. Zhang et al. (2024) employed Monte Carlo Tree Search to achieve efficient
and effective tree search.
-----
Figure 2: Visualization of our BEATS method.
Several other works fine-tune LLMs to enable self-improvement capabilities. For instance, Chen et
al. Chen et al. (2024b) employed Step-Level Value Preference Optimization to achieve strong model
performance. In a related work, Chen et al. Chen et al. (2024a) trained value and policy functions
and applied step-level beam search during inference to enhance mathematical abilities. Kumar et
al. Kumar et al. (2024) utilized reinforcement learning and oracle feedback to teach models selfcorrection.
3 METHOD
3.1 PROMPT DESIGN
We design three actions for the tree search, illustrated in Figure 5. The three options are: One Step
Forward, Giving Final Answer, and Disambiguation.
**One Step Forward** The prompt is summarized in Figure 5(a). It encourages the model to progress
through the search tree by evaluating the next logical step based on the current context and information. Given that mathematical problems often require multi-step reasoning, splitting a problem into
individual steps reduces the complexity of the LLM’s response. By addressing each step sequentially, we enhance the likelihood of arriving at the correct answer, as the model can focus on one
aspect of the problem at a time, thereby improving accuracy and clarity in reasoning.
**Giving the Final Answer** The prompt is summarized in Figure 5(b), this option directs the model
to provide a conclusive answer after considering all relevant information, ensuring clarity and precision in responses. At the appropriate moment, this prompt assists in summarizing the reasoning
behind multi-step answers, allowing the model to draw a definitive conclusion. By integrating insights from each step, it helps ensure that the final answer accurately reflects the cumulative logic
and reasoning process.
**Disambiguation** The prompt is illustrated in Figure 5(c). This prompt emphasizes reformulating
the initial query to enhance clarity and specificity, thereby facilitating a more effective search
process. This approach is novel, as many problem descriptions are frequently ambiguous or unclear,
leading to incorrect answers. For example, the query, Josh decides to try flipping
a house. He buys a house for $80,000 and then invests $50,000 in
repairs. This increased the value of the house by 150%. How much
-----
profit did he make?, can introduce ambiguity. By incorporating a step to rewrite questions,
we aim to eliminate such ambiguities, ensuring that the model fully comprehends the problem
before attempting to solve it. This helps prevent errors that result from misinterpretations of the
initial query.
3.2 PRUNING TREE SEARCH
**Algorithm 1: Pruning Tree Building Algorithm**
**Input: Maximum depth D, question q, tree node u, action list A, one-step action limit τ**, LLM
generation function G, action counter Count
**Function BuildTree(u, d):**
**if d < D then**
**foreach a ∈** _A do_
**if (a = "Disambiguation") ∧** _d > 1 then_
**continue;**
**if a = "One Step Forward" ∧** _Count(u, a) ≥_ _τ then_
**continue;**
_c ←_ new Node();
_u.value ←_ _G(LLM, u.prompt, a);_
_c.prompt ←_ _u.prompt ⊕_ _u.value;_
_u.addChild(c);_
**if "the answer is" ∈** _c.value then_
**continue;**
BuildTree(c, d + 1);
**Output: BuildTree(root, 1)**
In the constructed search tree τ, the root node represents the input question q, while the leaf nodes
correspond to the deduced answers S. The intermediate nodes represent reasoning states that connect the root to the leaves, with edges between these nodes indicating the actions A taken during the
reasoning process.
As shown in Figure 2, a node in the tree is denoted by ud, where d indicates the depth of the node.
For a given node ud, its ancestor nodes up to the root are denoted by the sequence ud−1, ..., u1.
Each node is associated with a prompt that concatenates the responses from previous rounds. These
prompts, containing prior rounds of answers, are fed into the action module to generate further
responses leading to the correct answer.
_d−1_
_ui.value_ (1)
_i=1_
M
_ud.prompt =_
Additionally, each node stores a value corresponding to the answer derived from both the preceding
rounds’ responses and the current action. The mathematical formulation is as follows:
_ud.value = G(LLM, ud.prompt, a)_ (2)
We apply the following heuristic pruning rules during this process:
(1) Disambiguation actions are restricted to the immediate successors of the root node to ensure that
clarifications or specifications are handled early.
(2) One-step actions are limited to five occurrences within Pi, preventing the inference path from
becoming excessively long or repetitive.
(3) If a node’s content ends with the phrase The answer is, the node is marked as a terminal
state and added to the set of candidate answers S. This rule helps efficiently identify conclusive
outcomes, ensuring the search process terminates once a definitive answer is found.
-----
3.3 BACK-VERIFICATION
After constructing the tree, we apply a depth-first search (DFS) to identify the leaf nodes. From
these, we select only those that contain the phrase The answer is as candidate answers for back
verification. For a candidate answer A, we concatenate it with the question Q for back verification
using LLMs:
_Correct = LLM_ (Q ⊕ _A)_ (3)
Back verification involves leveraging both the answer and the question to allow the LLM to confirm
the correctness of the answer. It is well-established that verifying an answer is typically easier
than solving the original problem. Thus, we employ back verification to enhance the accuracy of
validation. After the back-verification, we utilize majority voting based on the back-verification
results. The impact of back verification is further examined in Section 4.3.
4 EXPERIMENT
Table 1: We compared our method with previous tree search, zero-shot, and SFT approaches on two
commonly used benchmarks, i.e. GSM8K and MATH. Our model achieved SOTA performance on
both benchmarks.
**Model** **Base Model** **Size** **MATH** **GSM8K**
CoT LLaMA3 8B 27.8 50.27
CoT Yi-1.5 6B 30.42 64.47
CoT Qwen2 7B 36.94 76.63
Hard Voting@8 LLaMA3 8B 30.00 78.39
Hard Voting@64 LLaMA3 8B 33.00 83.24
WizardMath LLaMA2 7B 10.7 54.9
MuggleMath LLaMA2 7B - 68.40
MetaMath LLaMA2 7B 19.80 66.50
LEMA-LLaMA LLaMA2 7B 9.40 54.10
ToT LLaMA3 8B 13.60 69.07
RAP LLaMA3 8B 18.80 80.59
ReST-MCTS*(1st iteration) LLaMA3 8B 31.42 -
ReST-MCTS*(2st iteration) LLaMA3 8B 34.28 -
LiteSearch LLaMA3 8B - 82.30
Llama-2+M* (BS@16) LLaMA2 13B 32.40 66.30
Llama-2+M* (LevinTS@16) LLaMA2 13B 33.90 68.80
Ours (w.o. BackVerify) LLaMA3 8B 35.17 **83.62**
Ours LLaMA3 8B 42.93 88.48
Ours (w.o. BackVerify) Yi-1.5 6B 42.01 74.68
Ours Yi-1.5 6B 51.27 76.12
Ours (w.o. BackVerify) Qwen2 7B 57.28 81.50
Ours Qwen2 7B **61.52** 83.02
Zero-Shot
SFT
Search
Search
4.1 EXPERIMENT SETTINGS
**Datasets** We conduct experiments on five authority mathematical reasoning datasets: (1) GSM8K:
The GSM8K dataset consists of 1,319 test samples. It is widely used for arithmetic problem-solving
tasks and is designed to evaluate models’ performance on grade-school-level math problems. (2)
MATH: The MATH dataset contains 5,000 test samples, drawn from competition-style problems.
It covers a wide range of topics, including algebra, calculus, combinatorics, and geometry. (3)
SVAMP: The SVAMP dataset contains 1,000 math word problems, each with at most two mathematical expressions and one unknown variable. (4) SimulEq: The SimulEq dataset includes 514
-----
Table 2: We compare our method with previous models on SVAMP, SimulEq, and NumGLUE
benchmarks. Our method show significant improvement over these benchmarks.
**Model** **Base Model** **Size** **SVAMP** **SimulEq** **NumGLUE**
CoT LLaMA3 8B 53.90 21.20 27.35
Zero-Shot CoT Yi-1.5 6B 76.40 34.63 38.39
CoT Qwen2 7B 85.2 32.68 53.36
Code-Llama - 13B 60.00 3.80 27.60
WizardMath LLaMA2 13B 51.90 14.90 36.10
Platypus LLaMA2 13B 55.40 7.40 42.30
Platypus LLaMA1 30B+ 51.70 13.60 40.50
Platypus LLaMA2 65B+ 51.80 21.70 48.10
Ocra-Platypus LLaMA2 13B 56.80 7.90 35.30
MAmmoTH LLaMA2 13B 72.40 43.20 61.20
MAmmoTH-Coder Code-Llama 13B 73.70 47.10 66.40
Galactica GAL 30B 41.60 13.20 34.70
Tulu LLaMA2 30B+ 59.00 10.30 43.40
Guanaco LLaMA2 65B+ 66.80 20.20 40.50
Ours (w.o. BackVerify) LLaMA3 8B 80.60 72.76 66.99
Ours LLaMA3 8B 88.70 **78.40** 73.61
Ours (w.o. BackVerify) Yi-1.5 6B 79.30 34.72 75.43
Ours Yi-1.5 6B 83.70 34.82 **77.93**
Ours (w.o. BackVerify) Qwen2 7B 88.80 35.21 72.84
Ours Qwen2 7B **90.70** 36.19 73.16
SFT
Search
test samples. It centers on solving equations, with an emphasis on algebraic manipulation and logical reasoning. (5) NumGLUE: The NumGLUE dataset includes 1042 test problems, comprising 8
distinct tasks involving a range of numerical reasoning challenges, including arithmetic and quantitative reasoning within common sense, domain-specific contexts, reading comprehension, and natural
language inference.
**Models** To demonstrate the effectiveness of our approach, we conducted experiments using three
different models: LLaMA3-8B-Instruct, Yi-1.5-6B-Chat and Qwen2-7B-Instruct. The main experimental results are presented in Figure 1 and Figure 2. The detaild analysis can be found in
Section 4.2.
**Baselines** We consider three types of baseline models: (1) Zero-Shot Models, which include
Zero-Shot CoT and self-consistency based approache which first generate a set of candidate answers through multiple sampling and determine the final answer by majority voting. (2) Supervised
Fine-Tuning (SFT) Models, which encompass WizardMath, MuggleMath, MetaMath, and LEMALLaMA. (3) Search Algorithm-Based Models, including ToT Yao et al. (2024), RAP Hao et al.
(2023), ReST-MCTS* Zhang et al. (2024), LiteSearch Wang et al. (2024), and Llama-2+M*.
**Details** In our experimental setup, we configured the tree depth to 7, with the disambiguation step
allowed only as a direct successor to the root node. Node expansion was conducted using the vLLM
framework, with the following parameters: temperature is 0.8, top p is 0.9, and max tokens is 2048.
During the BackVerify stage, we employed Qwen2-7B-Instruct as the discriminator. For answer
verification, we utilized the same framework as MAmmoth. All experiments were conducted on
NVIDIA H800 GPU machines.
4.2 MAIN EXPERIMENT
The experimental results presented in Table 1 demonstrate the effectiveness of our proposed method
across both the MATH and GSM8K benchmarks. Compared to Zero-Shot category, our model,
even without the BackVerify step, significantly outperforms these baselines, achieving 35.17% on
MATH and 83.62% on GSM8K using LLaMA-8B as the base model. In the Search category, iter
-----
Figure 3: From this figure, we observe that models are prone to errors when using majority voting
but can achieve the correct answer through back verification.
ative methods like ReST-MCTS* show improvement over time, with the second iteration yielding
34.28% on MATH. Our model, with the BackVerify mechanism enabled, outperforms these methods, reaching 42.93% on MATH and 88.48% on GSM8K with LLaMA-8B. Furthermore, when
utilizing the Qwen-7B model, our approach reaches 61.52% on MATH and 83.02% on GSM8K,
demonstrating its robustness across different base models. Notably, even without fine-tuning, our
approach outperforms the SFT models across both MATH and GSM8K benchmarks. WizardMath
and LEMA-LLaMA, both fine-tuned models based on LLaMA-7B, achieve 10.7% and 9.4% accuracy on MATH, respectively, while our method without BackVerify reaches 35.17%, far surpassing
the SFT models. Similarly, on GSM8K, WizardMath achieves 54.9% and LEMA-LLaMA reaches
54.1%, whereas our model without BackVerify attains 83.62%, demonstrating a clear performance
advantage.
Additional experiments on the SVAMP, SimulEq and NumGLUE datasets consistently prove the
effectiveness of our method. On the SVAMP dataset, our model achieves a performance of 88.7 with
LLaMA, compared to the best Zero-Shot result of 85.2 using Qwen and the best SFT result of 73.7
from MAmmoTH-Coder. On the SimulEq dataset, our method achieves a significant improvement
with a score of 78.4 using LLaMA, outperforming all SFT models, where the highest score is 47.1
by MAmmoTH-Coder. Similarly, on the NumGLUE dataset, our method achieves 73.61, again
outperforming both the Zero-Shot and SFT models.
Overall, we have two following observations: (1) Fine-tuning alone may not be sufficient to achieve
optimal performance, and that the search-based methods integrated into our approach offer a more
robust mechanism for reasoning across tasks. (2) When solving mathematical problems, the MCTS
algorithm is not the only viable approach. A straightforward BFS search algorithm, combined with
carefully designed long-step and short-step problem-solving prompts along with the BackVerify
mechanism, can significantly enhance the model’s mathematical capabilities.
4.3 ABLATION STUDY
To better understand the strong performance of our model, we conducted an ablation study to demonstrate the effectiveness of the disambiguation and back verification modules by systematically removing them.
**Analysis of Disambiguation** To evaluate the impact of the disambiguation process, we performed several comparative experiments using the MATH and GSM8K datasets with both LLaMA3
and Qwen2 models. As shown in Table 3, removing the disambiguation component in BEATS
-----
Figure 4: From this figure, we observe that some questions may contain ambiguity, which can be
resolved by using the disambiguation module to generate a clarified version of the question.
led to a noticeable decline in accuracy across all experiments, thereby confirming the importance of the disambiguation process. Additionally, we examined the effectiveness of disambiguation through case studies. In Figure 4, the clarified question offers the following advantages:
1) The original phrasing of "3 sprints 3 times a week" is ambiguous, as it could suggest that James runs three sprints three times a week, or that each session consists of three sets
of three sprints. In contrast, the clarified question explicitly states that James runs three sprints
per session and completes these sessions three times per week, reducing potential misinterpretation. 2) The clarified question concisely outlines the key details, "3 sprints of 60 meters
each, 3 times a week", in a structured format that improves logical flow and understanding.
Table 3: We compare the performance with and
without the disambiguation module. The results
demonstrate the effectiveness of the disambiguation module.
Dataset Model Search Accuracy
w.o. disambiguation 23.2
LLaMA3
BEATS 42.93
MATH
w.o. disambiguation 51.88
Qwen2
BEATS 61.52
w.o. disambiguation 74.83
LLaMA3
BEATS 88.48
GSM8K
w.o. disambiguation 76.88
Qwen2 BEATS 83.02
**Analysis of Back Verification** In Table 1 and
Table 2, we compare model variants with and
without back verification across five benchmark
datasets: MATH, GSM8K, SVAMP, SimulEq,
and NumGLUE. The ablation study demonstrates that back verification consistently improves model performance, highlighting its robustness and effectiveness in enhancing the
model’s mathematical capabilities. Furthermore, as illustrated by the example in Figure 3,
when presented with the candidate answers [1]2
and [21]43 [, the LLM successfully discarded the in-]
correct solutions through back verification, ultimately selecting the correct answer.
Overall, the ablation study demonstrates the critical role of the disambiguation and back verification
modules in enhancing model performance. Removing either led to a drop in accuracy, showing their
effectiveness in clarifying ambiguous problem statements and filtering incorrect answers. Together,
these components significantly improve the model’s ability to solve mathematical problems.
5 CONCLUSION
In this paper, we introduced BEATS, a new method designed to enhance the mathematical problemsolving capabilities of LLMs. By addressing critical challenges such as suboptimal prompts, ineffective verification methods, and high computational costs, our approach offers a significant improvement in performance. The meticulously crafted prompts facilitate step-by-step reasoning, reducing ambiguities in problem statements and enabling the model to generate accurate answers. Our
innovative back-verification technique enhances the reliability of results by ensuring that answers
are thoroughly validated. Additionally, the pruning tree search strategy allows for controlled inference time while maintaining state-of-the-art performance. Through extensive experimentation, we
demonstrated that BEATS notably outperforms existing methods, marking a substantial step forward in the quest for improved mathematical reasoning in LLMs. Future work will explore further
refinements to these techniques and their applicability across a wider range of complex problem
domains.
-----
REFERENCES
Zhangir Azerbayev, Hailey Schoelkopf, Keiran Paster, MD Santos, Stephen McAleer, Albert Q
Jiang, Jia Deng, Stella Biderman, and Sean Welleck. Llemma: An open language model for
mathematics.(2023). arXiv preprint arXiv:2310.10631, 2021.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu,
Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi
Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng
Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi
Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang
Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu. Qwen technical report. arXiv preprint
arXiv:2309.16609, 2023a.
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge,
Yu Han, Fei Huang, et al. Qwen technical report. arXiv preprint arXiv:2309.16609, 2023b.
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, et al. Graph of
thoughts: Solving elaborate problems with large language models. In Proceedings of the AAAI
Conference on Artificial Intelligence, volume 38, pp. 17682–17690, 2024.
Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Alphamath almost zero: process supervision
without process. arXiv preprint arXiv:2405.03553, 2024a.
Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Step-level value preference optimization
for mathematical reasoning. arXiv preprint arXiv:2406.10858, 2024b.
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, and Zhiting Hu.
Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992,
2023.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems,
35:22199–22213, 2022.
Aviral Kumar, Vincent Zhuang, Rishabh Agarwal, Yi Su, John D Co-Reyes, Avi Singh, Kate Baumli,
Shariq Iqbal, Colton Bishop, Rebecca Roelofs, et al. Training language models to self-correct via
reinforcement learning. arXiv preprint arXiv:2409.12917, 2024.
Zhiyuan Li, Hong Liu, Denny Zhou, and Tengyu Ma. Chain of thought empowers transformers to
solve inherently serial problems. arXiv preprint arXiv:2402.12875, 2024.
meta llama. Introducing Meta Llama 3: The most capable openly available LLM to date, 2024. URL
[https://ai.meta.com/blog/meta-llama-3/. Accessed: 2024-05-02.](https://ai.meta.com/blog/meta-llama-3/)
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li,
Yu Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in open
language models. arXiv preprint arXiv:2402.03300, 2024.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timoth´ee
Lacroix, Baptiste Rozi`ere, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and
efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
Ante Wang, Linfeng Song, Ye Tian, Baolin Peng, Dian Yu, Haitao Mi, Jinsong Su, and Dong Yu.
Litesearch: Efficacious tree search for llm. arXiv preprint arXiv:2407.00320, 2024.
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language
models. arXiv preprint arXiv:2305.04091, 2023.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny
Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in
neural information processing systems, 35:24824–24837, 2022.
-----
Ling Yang, Zhaochen Yu, Tianjun Zhang, Shiyi Cao, Minkai Xu, Wentao Zhang, Joseph E Gonzalez,
and Bin Cui. Buffer of thoughts: Thought-augmented reasoning with large language models.
arXiv preprint arXiv:2406.04271, 2024.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Tom Griffiths, Yuan Cao, and Karthik
Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. Advances
in Neural Information Processing Systems, 36, 2024.
Huaiyuan Ying, Shuo Zhang, Linyang Li, Zhejian Zhou, Yunfan Shao, Zhaoye Fei, Yichuan Ma,
Jiawei Hong, Kuikun Liu, Ziyi Wang, et al. Internlm-math: Open math large language models
toward verifiable reasoning. arXiv preprint arXiv:2402.06332, 2024.
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
Mammoth: Building math generalist models through hybrid instruction tuning. arXiv preprint
arXiv:2309.05653, 2023.
Xiang Yue, Tuney Zheng, Ge Zhang, and Wenhu Chen. Mammoth2: Scaling instructions from the
web. arXiv preprint arXiv:2405.03548, 2024.
Eric Zelikman, Georges Harik, Yijia Shao, Varuna Jayasiri, Nick Haber, and Noah D Goodman.
Quiet-star: Language models can teach themselves to think before speaking. arXiv preprint
arXiv:2403.09629, 2024.
Dan Zhang, Sining Zhoubian, Yisong Yue, Yuxiao Dong, and Jie Tang. Rest-mcts*: Llm selftraining via process reward guided tree search. arXiv preprint arXiv:2406.03816, 2024.
-----
|(a) Prompt for giving one step solution|(b) Prompt for Giving Answer|(c) Prompt for Rewriting Questions|
|---|---|---|
|Please act as a professional math teacher. Your goal is to accurately solve a math word problem. To achieve the goal, you have two jobs. # Write the NEXT step in solving the Given Question. # Do not write the full solution or final answer until prompted. You have three principles to do this. # Ensure the solution is detailed and solves one step at a time. # Ensure each output consists of only one logical step. # Output strictly according to the format. Do not output any unnecessary content. Given Question: {question} Your output should be in the following format: STEP: <your single step solution to the given question>|Please act as a professional math teacher. Your goal is to accurately solve a math word problem. To achieve the goal, you have two jobs. # Write detailed solution to a Given Question. # Write the final answer to this question. # Output strictly according to the format. Do not output any unnecessary content. You have two principles to do this. # Ensure the solution is step-by-step. # Ensure the final answer is just a number (float or integer). Given Question: {question} Your output should be in the following format: SOLUTION: <your detailed solution to the given question> FINAL ANSWER: The answer is <your final answer to the question with only an integer or float number>|Please act as a professional math teacher. Your goal is to accurately clarify a math word problem by restating the question in a way that eliminates any potential ambiguity. To achieve the goal, you have two jobs. # Restate the Given Question clearly to avoid any ambiguity or confusion. # Ensure that all important details from the original question are preserved. You have two principles to do this. # Ensure the clarified question is fully understandable and unambiguous. # Ensure that no information is lost from the original question. Given Question: {question} Your output should be in the following format: CLARIFIED QUESTION: <your restated and clarified version of the original question>|
Figure 5: Check wrong answer derived by majority voting.
PROMPT
-----
| [
"Hao, Liang",
"Linzhuang, Sun",
"Wentao, Zhang"
] | 2024-09-26T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.17972 | https://arxiv.org/abs/2409.17972 | https://www.semanticscholar.org/paper/b076ae72014393277587623a54950f116f43340b |
Benchmarking Large Language Models for Math Reasoning Tasks | The use of Large Language Models (LLMs) in mathematical reasoning has become a cornerstone of related research, demonstrating the intelligence of these models and enabling potential practical applications through their advanced performance, such as in educational settings. Despite the variety of datasets and in-context learning algorithms designed to improve the ability of LLMs to automate mathematical problem solving, the lack of comprehensive benchmarking across different datasets makes it complicated to select an appropriate model for specific tasks. In this project, we present a benchmark that fairly compares seven state-of-the-art in-context learning algorithms for mathematical problem solving across five widely used mathematical datasets on four powerful foundation models. Furthermore, we explore the trade-off between efficiency and performance, highlighting the practical applications of LLMs for mathematical reasoning. Our results indicate that larger foundation models like GPT-4o and LLaMA 3-70B can solve mathematical reasoning independently from the concrete prompting strategy, while for smaller models the in-context learning approach significantly influences the performance. Moreover, the optimal prompt depends on the chosen foundation model. We open-source our benchmark code to support the integration of additional models in future research. | The results indicate that larger foundation models like GPT-4o and LLaMA 3-70B can solve mathematical reasoning independently from the concrete prompting strategy, while for smaller models the in-context learning approach significantly influences the performance. | ## Benchmarking Large Language Models for Math Reasoning Tasks
**Kathrin Seßler, Yao Rong, Emek G¨ozl¨ukl¨u, Enkelejda Kasneci**
Technical University of Munich, Munich, Germany
_{kathrin.sessler,yao.rong,enkelejda.kasneci}@tum.de_
**Abstract**
The use of Large Language Models (LLMs) in mathematical reasoning has become a cornerstone of related research,
demonstrating the intelligence of these models and enabling
potential practical applications through their advanced performance, such as in educational settings. Despite the variety of datasets and in-context learning algorithms designed to
improve the ability of LLMs to automate mathematical problem solving, the lack of comprehensive benchmarking across
different datasets makes it complicated to select an appropriate model for specific tasks. In this project, we present
a benchmark that fairly compares seven state-of-the-art incontext learning algorithms for mathematical problem solving across five widely used mathematical datasets on four
powerful foundation models. Furthermore, we explore the
trade-off between efficiency and performance, highlighting
the practical applications of LLMs for mathematical reasoning. Our results indicate that larger foundation models
like GPT-4o and LLaMA 3-70B can solve mathematical reasoning independently from the concrete prompting strategy,
while for smaller models the in-context learning approach
significantly influences the performance. Moreover, the optimal prompt depends on the chosen foundation model. We
open-source our benchmark code to support the integration
of additional models in future research.
**Introduction**
Increasing computational resources and the free accessibility to large amounts of textual data have boosted the developments of Large Language Models (LLMs) (Brown et al.
2020; OpenAI 2023a; Touvron et al. 2023; Gemini-Team
et al. 2023) in recent years. Central to this progress is the
transformer architecture (Vaswani et al. 2017) based on the
self-attention mechanism, which superseded recurrent neural networks for natural language processing and led to a
trend of extremely large and powerful models, mastering
many text-based tasks, such as summarizing (Zhang et al.
2024; Goyal, Li, and Durrett 2022), translating (Jiao et al.
2023) or question answering (Chowdhery et al. 2023).
While these models perform remarkably well for many
language tasks, solving mathematical word problems still
presents a challenge for LLMs (Hendrycks et al. 2021;
Imani, Du, and Shrivastava 2023; Kaddour et al. 2023). Such
problems require not only a deep understanding of the task
description but also sophisticated logical and mathematical
_MATH Dataset - Algebra - Level 1_
**Question: What is the positive difference between**
120% of 30 and 130% of 20?
**True answer: One hundred twenty percent of 30 is**
120·30· 1001 [= 36][, and][ 130%][ of 20 is][ 130][·][20][·] 1001 [= 26][.]
The difference between 36 and 26 is 10
**LLaMA-3-8B: 120% of 30 is 30 x 1.2 = 36. 130% of**
20 is 20 x 1.3 = 26. The difference is 36 - 26 = 10.
**LLaMA-3-70B: 120% of 30 is 36. 130% of 20 is 26.**
36 - 26 is 10. The answer is 10.
**GPT-3.5: We start by finding 120% of 30, which is**
1.2 × 30 = 36.
Next, we find 130% of 20, which is 1.3 × 20 = 26.
The positive difference between these two values is
36 − 26 = 10 .
**GPT-4o: To find the positive difference between 120%**
of 30 and 130% of 20, we first calculate each percentage.
1. Calculate 120% of 30: 120% 30 = [120]100
1.2 × 30 = 36 _×_ _[×][ 30 =]_
2. Calculate 130% of 20: 130% 20 = [130]100
1.3 × 20 = 26 _×_ _[×][ 20 =]_
Next, find the positive difference between the two results: 36 − 26 = 10
Therefore, the positive difference is ( 10 ).
Figure 1: Mathematical Reasoning Task from the MATH
dataset. The question belongs to the Algebra category at
level 1, and the predicted answers were generated by the different foundation models using CoT approach.
reasoning to determine the solution path. Additionally, the
computational demands of finding the result pose further
challenges. Since foundational models are not specifically
optimized for mathematical calculations, they often struggle
even with elementary arithmetic (Cobbe et al. 2021). Figure 1 lists various answers from different models.
The ability to perform mathematical reasoning is crucial
not only in educational settings, where LLMs could potentially serve as a technological foundations for personalized tutoring (Liang et al. 2023; Kasneci et al. 2023), but
also in higher education, where it could assist with complex problem-solving and discovering new proofs (Wang
-----
et al. 2024). Moreover, in professional fields such as finance
and economics, where mathematical problems are prevalent,
these capabilities could have significant applications (Korinek 2023).Therefore, enhancing LLMs’ mathematical reasoning skills would benefit various applications, extending
from classroom support to advanced research and industry.
To address this challenge, several algorithms and methods have been proposed to improve the mathematical reasoning capabilities of LLMs. Understanding the strengths
and limitations of each algorithm is essential for making informed decisions and effectively applying them in different
contexts. While previous research has often evaluated these
methods on rather limited datasets (Qiao et al. 2023) or using similar methods (Luo et al. 2023), there remains a need
for more comprehensive analysis. Addressing this research
gap, our study presents an extensive benchmarking of seven
advanced methods across five widely-used datasets. Additionally, whereas prior work has mainly focused on accuracy
as the main performance metric, we also consider time and
cost factors, for deploying these algorithms effectively.
More specifically, to tackle mathematical reasoning with
LLMs, we identify three core tasks: First, achieving high
**accuracy requires the model to correctly and consistently**
explain the reasoning path leading to the correct result. Second, robustness is demonstrated through the model’s ability
to produce the correct result across multiple repetitive calls.
Finally, efficient resource usage - encompassing time and
API costs - is crucial for practical deployment.
To tackle these challenges, there are different ideas for improving the mathematical reasoning capabilities of LLMs.
Keeping the resource consumption low, the prompt for the
LLM can be adapted by providing various samples and initiating a step-by-step problem solving approach. The Chainof-Thought (CoT) approach (Wei et al. 2022) as well as
the Auto CoT (Zhang et al. 2023) exploit this idea. In a
similar way, the Zero-Shot CoT (Kojima et al. 2022) operates, but without the few-shot samples. Setting the focus on the robustness of the solution and therefore requiring a higher computational effort, the Self-Consistency
(Wang et al. 2023) and Complex CoT (Fu et al. 2023) approaches optimize the solving process. Both call the foundational model repeatedly and therefore increase the robustness. Lastly, external engines like Python can be exploited
outsourcing the computational tasks partly from the LLM to
an external resource. PAL (Gao et al. 2023) and PoT (Chen
et al. 2023) are both using this approach, applying few-shot
samples and pipelines adapted for this use case. All these approaches bring their trade-off between accuracy, robustness,
and consumption of resources, and for a fair comparison, all
dimensions need to be investigated thoroughly.
We aim to bridge this gap by comparing different mathematical reasoning strategies based on LLMs in multiple dimensions, providing a trade-off between costs and performance. We make our source code publicly available to enable research to reproduce our findings. In summary, our
contributions are as follows:
- We provide a detailed empirical evaluation of seven approaches in multiple dimensions, providing a trade-off
between performance, robustness and consumption of re
sources (time and costs).
- Our evaluation results highlight several merits, such as
identifying that the algorithms Auto CoT, together with
LLaMA 3-70B, and Zero-Shot CoT with GPT-3.5, provide the optimal trade-off between performance and
computational resources.
- We release an open-source benchmarking codebase for
fair mathematical reasoning, which can be easily extended for future research.
**Related Work**
Multiple surveys examine the problem of mathematical reasoning with LLMs from theoretical viewpoints. Lu et al.
(2023) investigate the different deep learning methods used
for mathematical reasoning, Ahn et al. (2024) give a thorough overview specifically covering LLMs for this task, and
Huang and Chang (2023) analyse the different kinds of reasoning problems and the corresponding opportunities for
LLMs. All surveys emphasize the missing generalizability
and robustness, the challenges with complex questions, and
the problem of hallucinations and trustworthiness.
Setting the focus on the algebraic skills of LLMs, Yuan
et al. (2023) find that GPT-3.5 and GPT-4 outperform the
other foundation models. As key points, they mark the influence of tokenization, the importance of the model size,
and the sensitivity of prompts, identifying different optimal prompts across the LLMs. However, they also emphasize that the algebraic ability of a model cannot be directly
equated with the mathematical reasoning capabilities.
From an empirical viewpoint Qiao et al. (2023) show that
Chain-of-Thought reasoning works decently on the GSM8K
dataset for GPT-3 (Brown et al. 2020) and Codex (Chen et al.
2021), but poor for LaMDA (Thoppilan et al. 2022), and the
results for PaLM (Chowdhery et al. 2023) lie in-between.
They compare different model sizes, in general, finding that
bigger models perform better. Luo et al. (2023) compare
various foundation models on the GSM8K and the MATH
dataset, finding that closed-source models perform significantly better than open-source models and GPT-4 being the
overall best model for mathematical reasoning.
A detailed analysis of various LLMs for mathematical
reasoning tasks in combination with visual inputs is reported by Lu et al. (2024). In line with the previous results,
they show the best performance for GPT-4 Vision (OpenAI
2023b), which lies still clearly below human achievements.
All of the studies do not report any reference time or resource consumption of the different models to compare them
from this point of view. We aim to close this gap by comparing different mathematical reasoning methods empirically
on various foundation models, not only comparing the performance but also reporting thoroughly the associated costs
and time factors. Also, they compare the foundation models
but not the different prompting strategies applied.
**Benchmarking Details**
The goal of this benchmarking is to systematically evaluate different strategies for fitting and aligning large language models (LLMs) for mathematical reasoning. We per
-----
9 î 5+ 9 3 î 1
**Prompt Engineering**
**CoT** **Zero-shot CoT** **Auto-CoT**
(Wei et al. 2022) (Kojima et al. 2022) (Zhang et al. 2023)
Using several Adding the prompt Automatically
examples to “Let’s think step extracting
show detailed by step.” different samples
reasoning steps. î 2 for CoT.
9 î 1 9 î 1
**Process Optimization**
**Complex-CoT** **Self-Cons. CoT**
(Fu et al. 2023) (Wang et al. 2023)
Using complex Using CoT with
samples and majority voting
selecting the best to increase
of complex robustness.
solution paths. 9 î 10+
9 î 5+
**External Engine**
**PoT** **PAL**
(Chen et al. 2023) (Gao et al. 2023)
Separate reasoning Integrate program
from computational synthesis into
process by model’s workflow
generating code. to solve task
9 3 î 1 directly.
9 3 î 1
Figure 2: Overview of the mathematical reasoning methods evaluated in the benchmark, categorized into three groups: Prompt
Engineering, Process Optimization, and External Engine. The primary procedure for each method is outlined. Symbols denote
the presence of few-shot examples (9), the use of an external engine (3), and the number of refinement iterations required (î).
**Methods for Mathematical Reasoning**
We compare in-context methods that keep the parameters of
the foundation models unchanged while employing prompting strategies to enhance the models’ capabilities. These
methods can be categorized into classic prompt engineering,
process optimization, and the use of external engines.
The standard Chain-of-Thought (CoT) approach (Wei
et al. 2022) involves providing the model with multiple examples where the solution process is detailed step by step,
guiding the model to generate a reasoning chain before arriving at a conclusion. The Zero-Shot CoT approach (Kojima et al. 2022) streamlines this by replacing explicit examples with the key phrase "Let’s think step by
step.". This strategy involves two model calls: one to
generate the reasoning path and another to extract the final answer. A variant of the classic CoT is the Auto CoT
approach (Zhang et al. 2023), which replaces manually curated few-shot examples with automatically selected ones.
This is achieved by clustering the training data and selecting
the best representative from each cluster.
Due to the stochastic nature of foundation models, the
same prompt can yield both correct and incorrect solutions.
To address this variability, the Self-Consistency CoT approach (Wang et al. 2023) involves generating multiple responses from the model and using a majority vote to select
the most likely correct solution. This method utilizes the
same few-shot examples as the standard CoT. In contrast, the
**Complex CoT approach (Fu et al. 2023) operates on the as-**
sumption that higher complexity can enhance performance.
It selects the most complex examples from the training data
for few-shot input and generates multiple solutions, applying majority voting only to the most complex responses.
Recognizing the limitations of LLMs in algebraic tasks,
the Program-of-Thought (PoT) (Chen et al. 2023) approach
extends CoT to make use of an external Python engine. In
the few-shot examples, it is guided to separate the reasoning
process from the computation part by generating program
code. A similar idea is exploited by PAL (Gao et al. 2023),
which integrates program synthesis into the model’s workflow, focusing on generating executable code to solve tasks.
Many methods can be combined or varied by adjusting the
number of contextual examples, applying techniques such as
form a comparative analysis of these methods across five
datasets, using both open-source and closed-source ground
models, and evaluate performance across multiple dimensions. The following sections provide a detailed description
of our benchmarking methodology.
**Datasets for Mathematical Reasoning**
To assess the reasoning ability of the methods, we use several mathematical word problems. These tasks stem from
five publicly available datasets that are commonly used in
the relevant literature and represent varying levels of difficulty, from elementary school to university mathematics.
The datasets also include various types of problems, including both open-ended questions and multiple-choice tasks.
The specific details of the datasets are presented in Table 1.
Table 1: Datasets for mathematical reasoning employed in
the benchmarking. GS stands for Grade School level, HS for
High School and CO for College level.
Dataset #Train #Test Problem Type Level
GSM8K 7,470 1,319 Word Problem GS
SVAMP 700 300 Word Problem GS
Multi-Arith 420 180 Word Problem GS
MATH 7,500 5,000 Word Problem HS
AQuA 97,500 254 Multiple Choice CO
**Foundation Models**
Many of the mathematical reasoning methods are modelagnostic, i.e. they are not dependent on a particular base
model. Although most studies report results using GPT3.5 and GPT-4, the choice of additional models varies. To
ensure a fair and comprehensive comparison, we evaluate
these methods using state-of-the-art open-source and closedsource base models. In addition to GPT-3.5 [1] and GPT4o [2] (OpenAI 2023a), we analyze LLaMA 3-8B and LLaMA
3-70B models (Touvron et al. 2023).
1gpt-3.5-turbo-0125
2gpt-4o-2024-05-13
-----
|Col1|GSM8K SVAMP Multi-Arith AQuA MATH|
|---|---|
|GPT-3.5 GPT-4o CoT LLaMA 3-8B LLaMA 3-70B|0.78 0.34 0.86 0.30 0.99 0.09 0.74 0.34 0.57 0.45 ± ± ± ± ± 0.97 ± 0.18 0.96 ± 0.18 1.00 ± 0.00 0.87 ± 0.30 0.79 ± 0.38 0.63 0.38 0.78 0.32 0.98 0.05 0.61 0.30 0.32 0.41 ± ± ± ± ± 0.87 0.29 0.91 0.23 1.00 0.01 0.60 0.41 0.41 0.43 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o Auto CoT LLaMA 3-8B LLaMA 3-70B|0.71 0.35 0.90 0.27 0.92 0.22 0.76 0.35 0.53 0.45 ± ± ± ± ± 0.97 ± 0.18 0.96 ± 0.17 1.00 ± 0.00 0.86 ± 0.30 0.80 ± 0.37 0.67 0.36 0.85 0.28 1.00 0.02 0.62 0.31 0.19 0.28 ± ± ± ± ± 0.92 ± 0.22 0.93 ± 0.21 1.00 ± 0.00 0.79 ± 0.33 0.49 ± 0.45|
|---|---|
|GPT-3.5 GPT-4o Zero-Shot CoT LLaMA 3-8B LLaMA 3-70B|0.91 0.23 0.91 0.25 1.00 0.03 0.80 0.30 0.48 0.46 ± ± ± ± ± 0.93 0.25 0.93 0.24 1.00 0.01 0.81 0.36 0.61 0.46 ± ± ± ± ± 0.57 0.36 0.75 0.30 0.87 0.19 0.25 0.26 0.14 0.26 ± ± ± ± ± 0.81 0.33 0.85 0.25 0.98 0.06 0.26 0.34 0.19 0.33 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o Complex CoT LLaMA 3-8B LLaMA 3-70B|0.90 0.26 0.86 0.31 1.00 0.03 0.51 0.41 0.54 0.45 ± ± ± ± ± 0.96 ± 0.20 0.95 ± 0.20 1.00 ± 0.00 0.80 ± 0.37 0.63 ± 0.46 0.58 0.43 0.80 0.33 0.99 0.03 0.68 0.30 0.27 0.38 ± ± ± ± ± 0.92 ± 0.28 0.91 ± 0.26 1.00 ± 0.00 0.83 ± 0.30 0.47 ± 0.45|
|---|---|
|GPT-3.5 GPT-4o Self-Consistency CoT LLaMA 3-8B LLaMA 3-70B|0.65 0.39 0.80 0.39 0.97 0.15 0.71 0.43 0.27 0.41 ± ± ± ± ± 0.96 ± 0.20 0.95 ± 0.20 1.00 ± 0.00 0.87 ± 0.32 0.81 ± 0.38 0.60 ± 0.38 0.83 ± 0.31 1.00 ± 0.00 0.69 ± 0.33 0.28 ± 0.41 0.88 ± 0.32 0.94 ± 0.22 1.00 ± 0.00 0.65 ± 0.43 0.44 ± 0.45|
|---|---|
|GPT-3.5 GPT-4o PAL LLaMA 3-8B LLaMA 3-70B|0.77 0.38 0.80 0.38 0.93 0.26 0.40 0.44 N/a ± ± ± ± 0.96 0.19 0.96 0.19 1.00 0.01 0.52 0.47 N/a ± ± ± ± 0.57 0.40 0.74 0.36 0.91 0.22 0.24 0.35 N/a ± ± ± ± 0.82 0.34 0.85 0.30 0.95 0.18 0.35 0.44 N/a ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o PoT LLaMA 3-8B LLaMA 3-70B|0.75 0.44 0.87 0.33 0.99 0.10 0.50 0.50 N/a ± ± ± ± 0.83 ± 0.38 0.80 ± 0.40 1.00 ± 0.00 0.66 ± 0.47 N/a 0.54 0.50 0.58 0.49 0.42 0.49 0.31 0.46 N/a ± ± ± ± 0.82 0.38 0.87 0.33 0.70 0.46 0.58 0.49 N/a ± ± ± ±|
|---|---|
Table 2: Performance comparison based on the pass@3 metric. The best performance for each dataset is highlighted in bold,
while the second-best performance is underlined.
majority voting, or incorporating additional post-processing
steps to improve the results. To obtain a focused and practicable benchmark, we concentrate on the core concepts
without implementing these variations. Figure 2 gives an
overview of the different methods, highlighting the primary
ideas and the main differences in the technical implementation, e.g. the use of examples in the prompts or the need for
iterative optimization. It is important to note that iterative
optimization requires more computational resources.
**Performance Metrics**
To evaluate the results, we use the pass@k metric (Chen
et al. 2021), which measures the probability that the correct answer is generated at least once when the model is
prompted k times. Chen et al. (2021) introduced an unbiased estimator for this metric by prompting the model
_n > k times, which helps mitigate the impact of the inherent_
stochasticity in language models. Given c as the number of
correct responses, the metric is calculated as follows:
When using k = 1, we report the averaged accuracy of
the algorithms on solving mathematical problems. With a
larger k, we can study the robustness of different methods.
Beyond performance in successfully solving the task, we
also compare the resource consumption. For open-source
models (LLaMA), we track elapsed time, while for closedsource models (GPT), we monitor the costs of API calls.
**Experimental Results**
To compare the different methods on the various foundational models, we conduct an extensive analysis over five
datasets. In the following we describe the outcomes of our
study. The code for our experiments can be found here [3].
**Experimental setup**
We integrated the original implementations of the papers
into our repository if code is provided by the paper. Similarly, we took all the original few-shot examples as far as
the papers provide. For Auto CoT and Complex CoT, we
followed their instructions to generate the few-shot samples
3https://github.com/kathrinse/Math-Reasoning-Benchmark
_n−k_ _c_
_n_
_k_
pass@k := ESamples
(1)
1 −
-----
|Col1|Col2|Col3|
|---|---|---|
||||
||||
|||70B|
|||8B|
|Col1|GPT|Col3|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||33..55||
||||33..55|
|||44oo||
LLaMA 3 GPT
1.0 1.0
0.9 0.9
Pass@3 0.8 0.8
0.7 0.7
0.6 70B 0.6 3.5
8B 4o
1 sec 10 sec 1 min 0.1 1 10
Time per Sample Costs (Dollars for 1K samples)
CoT Auto CoT Complex CoT Self-C. CoT ZeroShot CoT PAL PoT
Figure 3: Trade-off between performance and computational costs on the GSM8K dataset. The y-axis represents the pass@3
and the x-axis the computational costs. On the left side, the LLaMA foundation models are shown, based on the elapsed
computation time, and on the right side, the GPT foundation models are compared based on the costs for the API calls.
accordingly, and for both we manually added the missing
reasoning paths for the samples. The Self-Consistency approach always used the same prompt as the vanilla CoT.
For the smaller models, we generated n = 10 results per
sample to compute the pass@3. The larger models (GPT-4o
and LLaMA 3 - 70B) were stable enough to decrease n = 5
for cost and resource savings. For Complex CoT, we generated 5 solutions and applied the majority voting on the 3
most complex ones. For the Self-Consistency CoT approach,
we always took the majority of 10 repeated runs.
The questions in the MATH dataset range from algebraic
up to geometric problems, often requiring more complex results than mere floating point numbers. Therefore, the methods relying on external engines to perform the main calculus, were not suitable for this advanced kind of problems.
The LLaMA models were executed locally on one (for
8B) resp. two (for 70B) NVIDIA A100 80GB GPUs, both
ran in half-precision (float16). The batch size was equal to n,
except for Self-Consistency and Complex CoT, where it was
equal to the number of runs for the majority voting. Concrete
prompts and more details can be found in Appendix.
**Robust Performance**
In this section, we examine the influence of various prompt
strategies, foundation models, and datasets on the pass@3
metric, which represents robust performance. Table 2 lists
the comparison results. Besides the pass@3 metric, we also
report the accuracy reached by all models in Appendix.
**Foundation Models.** When comparing different foundation models, we see that algorithms incorporating GPT-4o
generally achieve the best performance. Moreover, LLaMA
3-70B and GPT-3.5 are always in the second and third
places, respectively, highlighting the connection between
model capability and size. It is worth noting that the strong
performance of LLaMA 3-70B is even more robust, while
the outcomes of GPT-3.5 rely more on the quality of the
prompt. For instance, on the dataset GSM8K, LLaMA 370B consistently achieves an accuracy of above 81%, while
GPT-3.5 ranges from 65% (Self-Consistency CoT) to 91%
(Zero-Shot CoT). A smaller model, LLaMA 3-8B, also fluctuates more depending on the concrete method applied.
**Prompt Strategies.** Different foundation models favor
different prompt strategies. As previously mentioned, GPT4o combined with CoT outperforms other algorithms, while
for both LLaMA 3 models, the Auto CoT approach yields
the largest performance boost. For GPT-3.5, Zero-Shot CoT
demonstrates the best performance, suggesting that additional shots may not benefit the model but need clarification.
Instead, Zero-Shot CoT calls the foundation model twice per
sample—first for reasoning and second for the final prediction—which appears to increase the stability of the final result. In contrast, all other models call the foundation models only once for reasoning. Despite this, the costs and time
consumption of Zero-Shot CoT remain comparable due to
its concise prompt. Additionally, the two methods using the
external engine are less favorable in performance than other
CoT-based methods, especially on the AQuA datasets. Here,
the computation executed by the external engine is not optimal for the advanced reasoning required and the multiple
choice structure of this dataset poses an additional challenge.
**Datasets.** The Multi-Arith dataset raises less difficulties;
even a small LLaMA 3 model exceeds 80% pass@3, while
larger models can achieve near-perfect performance. Therefore, the differences between prompting strategies are negligible. The only exception is PoT, which, in its original code
base, implements a zero-shot code generation approach that
performs poorly across all foundation models.
In contrast, the AQuA dataset, due to its college-level difficulty, presents a greater challenge. Its multiple-choice format causes problems for code-generated solutions, making
them unsuitable for this dataset. Here, the choice of prompting strategies significantly impacts the largest LLaMA 3
model, with the Zero-Shot CoT approach producing inferior results compared to GPT-4 using Complex CoT. GPT3.5 also shows more variability in its performance.
In general, the quality of the foundation model has a more
significant impact on performance. GPT-4o and LLaMA 370B consistently achieve over 80%, regardless of prompt
-----
0.9 0.9
Algebra 0.76 0.50 0.43 0.27 0.15 Algebra 0.90 0.73 0.60 0.42 0.26
0.8 0.8
Counting & 0.61 0.35 0.25 0.15 0.12 Counting & 0.74 0.49 0.33 0.25 0.20
Probability 0.7 Probability 0.7
Geometry 0.46 0.35 0.27 0.18 0.09 0.6 Geometry 0.67 0.55 0.42 0.29 0.14 0.6
Intermediate 0.49 0.30 0.21 0.14 0.06 0.5 Intermediate 0.61 0.42 0.26 0.22 0.09 0.5
Algebra Algebra
0.4 0.4
Number 0.45 0.36 0.26 0.13 0.08 Number 0.76 0.58 0.43 0.32 0.15
Theory Theory
0.3 0.3
Prealgebra 0.69 0.60 0.46 0.36 0.18 Prealgebra 0.87 0.78 0.69 0.57 0.38
0.2 0.2
Precalculus 0.56 0.31 0.13 0.08 0.05 0.1 Precalculus 0.69 0.41 0.28 0.19 0.10 0.1
Level 1 Level 2 Level 3 Level 4 Level 5 Level 1 Level 2 Level 3 Level 4 Level 5
Figure 4: Detailed analysis of the pass@3 metric using CoT separated by the different question categories and levels in the
MATH dataset. On the left side, the LLaMA 3-8B results are shown, on the right side the outcomes using LLaMA 3-70B.
ing strategy, on all datasets except for MATH. Here, even
GPT-4o ranges from merely 61% up to 81%. LLaMA 370B achieves only 21% pass@3 applying Zero-Shot CoT
and performs best employing the Auto CoT strategy with
49%. There is a significant performance difference among
all foundation models, except between LLaMA 3-70B and
GPT-3.5. For a detailed ANOVA analysis, see Appendix.
**Efficiency**
Beyond performance, computational cost is an important
factor to consider in practice. In this section, we examine the
trade-off between performance and required resources. Figure 3 illustrates a complete comparison of different prompt
strategies on GSM8K. From the results on LLaMA 3, we
see that larger model 70B achieves much higher result in
pass@3, but also costs more time. For instance, Auto CoT
with LLaMA 3-70B has a computation time of 10 seconds per sample, while LLaMA 3-8B takes only 1 second
per sample. Comparing different prompt strategies, Auto
CoT has the best trade-off between performance and costs,
whereas Self-Consistency CoT is the most computationally
costly method due to its multiple refinement runs.
When using GPT as our foundation model, the computational resources are measured by the costs in US Dollars for
each sample. Similar to the findings from LLaMA 3 models, a larger model (GPT-4o) performs better than a smaller
one (GPT-3.5) but is more computationally costly. However,
GPT-3.5 with Zero-Shot CoT achieves a good trade-off, as
its performance reaches almost the same level as GPT-4o
on GSM8K, but its costs are approximately 30 times less.
Auto CoT with GPT-4o increases the accuracy of pass@3
by four percentages compared to GPT-3.5 with Zero-Shot
CoT, but it also requires approximately 20 times more US
Dollars. Therefore, GPT-3.5 with Zero-Shot CoT obtains the
best trade-off, highlighting its potential for practical uses.
LLaMA 3 with Auto CoT consistently shows its advantages in high efficiency and performance on the other four
datasets. GPT-3.5 with Zero-Shot CoT is also an optimal
option except for the dataset MATH, where GPT-4o with
Auto CoT reveals a significant performance improvement.
Detailed illustrations can be found in Appendix.
**Proficiency**
The MATH dataset consists of questions from different domains (e.g. Algebra or Geometry) and levels (from 1 to 5).
Figure 4 gives an comprehensive overview of the performance of the LLaMA 3 models using the CoT prompting
strategy. According to this analysis the level 1 questions in
Algebra and Prealgebra can be solved frequently, of the time
by the all models. As also shown in Figure 4 the 70B model
reaches an higher performance for all domains compared to
8B model. On the other hand, the level 4 and 5 questions
pose a challenge for both models with less then 30% correct
answers for nearly all domains. This differentiation shows
the strength of the MATH dataset, allowing for a detailed
analysis, and also highlight the need for further research in
more complex and higher-level mathematical settings.
Figure 1 qualitatively compares the answers generated by
all models for a question in the Algebra domain of level 1
and Figure 5 analyses a level 5 question for both LLaMA
3 models. The 8B model solved the level 1 task correctly
but struggled with the more complex level 5 question. Concretely, the answers from the 8B and 70B models are the
same for the level 1 question, and they are highly consistent with the true answer. However, the challenge for the 8B
model to answer the question of level 5 is to correctly identify the first step, converting the question into an inequality.
Another common failure of the small model is the limitation
in calculations. In this case, LLaMA 3-8B wrongly calculates the value of g(x) for x = 8. The 70B model improves
the ability to understand the question and mathematical calculation, leading to a correct answer.
Both GPT models yield in general a high proficiency over
all types and levels. As shown in Appendix, GPT-3.5 reaches
in level 5 between 11% and 52% pass@3, and GPT-4o even
surpasses 30% across all question types. However, it also
requires a greater amount of tokens for its comprehensive
argumentation path, associated with higher costs.
-----
ing significant savings in computational overhead while providing competent performance for simpler tasks. Similarly,
LLaMA 3-8B excels on less demanding tasks despite its
smaller number of parameters, although its utility decreases
with increasing task complexity due to stability issues.
Our analysis further shows that there is no consistently
superior prompting strategy for all datasets and models. Extended reasoning strategies appear to be particularly beneficial for improving the performance of smaller models,
while the differences between strategies for robust models
such as GPT-4o and LLaMA 3-70B are relatively small. For
the LLaMA models, Auto CoT provides an optimal balance
between cost and performance, while Zero-Shot CoT has
shown promising results on several tasks.
Despite these advances, there is still room for improvement in the performance of mathematical reasoning tasks
at school and university level. For example, while GPT-4o
achieves over 80% accuracy in the AQuA dataset, human
performance is typically 85% according to (Zhong et al.
2023). In the MATH dataset, the maximum accuracy of
GPT-4o is 75%, while PhD students achieve between 40%
and 90% (Hendrycks et al. 2021). These results suggest
that further progress in model training and reasoning skills
is needed to close this gap, especially for tasks requiring
deeper cognitive abilities and complex problem-solving.
These findings reveal important practical implications for
real-world applications. In an educational context, GPT-3.5
can serve as a cost-effective solution for lower grades, while
the more advanced GPT-4o model is better suited for high
school mathematics. For data-sensitive applications, such as
finance, LLaMA 3 offers a privacy-preserving and powerful open-source alternative. Furthermore, when implementing LLMs to solve these tasks, users might encounter issues
such as occasional errors in reasoning and challenges in extracting relevant information from very long responses.
**Conclusion**
Our benchmarking study has shown that the choice of base
model has a greater impact on the performance of LLMs on
mathematical reasoning tasks than specific prompting strategies. Our results further suggest that while larger models
such as GPT-4o and LLaMA 3-70B provide robust performance on simpler tasks, they present challenges in terms of
scalability and efficiency, especially for complex problems
like in higher education contexts. For cost-effective applications, GPT-3.5 and LLaMA 3-8B offer alternatives for less
complex tasks, but show limitations for advanced problems.
**Limitations.** We identify the limited number of foundation models used in our benchmark. Future work should
consider other state-of-the-art open-source models such as
Mistral-AI (Jiang et al. 2023) and closed-source models like
Gemini (Gemini-Team et al. 2023) or Claude (Anthropic
2024). Nevertheless, our codebase has been released and is
easy to use for integrating new foundation models. Moreover, we identified another set of models fine-tuned on mathematical reasoning problems, which we plan to include in
our benchmarking for a more comprehensive evaluation beyond just prompt strategies.
_MATH Dataset - Algebra - Level 5_
**Question: What is the smallest real number x in the**
domain of the function
_g(x) =_ (x − 3)[2] _−_ (x − 8)[2] ?
**True answer: A real number x is in the domain of g**
p
if and only if (x − 3)[2] _−_ (x − 8)[2] _≥_ 0. Expanding this
out and simplifying, we get 10x − 55 ≥ 0; the smallest
solution is x = 10[55] [= 11]
2 [.]
**LLaMA-3-8B: The answer is 8. It is easy to see that**
_g(x) will be 0 when x = 8. Since g(x) will be 0 when_
_x = 8, the minimum value of g(x) will be 0. That is,_
the smallest real number x in the domain of g(x) will
be 8.
**LLaMA-3-70B: Note that the expression under the**
square root must be non-negative, or else the expression will be undefined. So we must have (x − 3)[2] _−_
(x − 8)[2] _≥_ 0. Expanding out the squares, we get
_x[2]_ _−_ 6x + 9 − (x[2] _−_ 16x + 64) ≥ 0, or 10x − 55 ≥ 0.
Therefore x ≥ 5.5, so the smallest possible value of x
is x = 5.5. The answer is 5.5.
Figure 5: Further examples from the MATH dataset. The
question belongs to the category Algebra, level 5, and the
predicted answers were generated by LLaMA 3-8B and
LLaMA 3-70B using CoT approach.
**Discussion**
In our benchmarking study, we provide empirical evidence
that the choice of the foundation model significantly influences the performance of LLMs on mathematical reasoning tasks, overshadowing the impact of specific prompting
strategies. For instance, models like GPT-4o and LLaMA
3-70B show performance enhancements of up to 20% over
smaller models across various datasets, highlighting the crucial role of model architecture and size. These findings align
with previous results, indicating that model size significantly
enhances the performance, and GPT-4 proving to be the
strongest model (Qiao et al. 2023; Luo et al. 2023).
For elementary school tasks, GPT-4o and LLaMA 3-70B
were particularly effective, suggesting that their advanced
architectures are well suited to solving simpler math reasoning tasks. GPT-4o consistently answered over 95% of
the questions in the GSM8K dataset correctly using almost
all methods, albeit with a significant drop to 83% accuracy
for the reasoning program method (PoT). This drop can often be attributed to the verbose output of GPT-4o, which
exceeds typical token constraints and requires a higher token allowance for conclusive answers. This behavior underscores the need to optimize token efficiency in practical applications, especially in situations where computational resources are limited. An illustrative example of this token
constraint and its impact is given in the Appendix.
The main limitations to the use of GPT-4o and LLaMA
3-70B are their high operating costs and runtimes, which
make them less suitable for routine tasks. For cost-sensitive
contexts, GPT-3.5 is a more cost-effective alternative, offer
-----
**References**
Ahn, J.; Verma, R.; Lou, R.; Liu, D.; Zhang, R.; and Yin,
W. 2024. Large Language Models for Mathematical Reasoning: Progresses and Challenges. In Proceedings of the
_18th Conference of the European Chapter of the Association_
_for Computational Linguistics: Student Research Workshop,_
225–237. St. Julian’s, Malta: Association for Computational
Linguistics.
Anthropic. 2024. The Claude 3 Model Family: Opus, Sonnet, Haiku. Claude-3 Model Card.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.;
Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell,
A.; et al. 2020. Language models are few-shot learners. Ad_vances in neural information processing systems, 33: 1877–_
1901.
Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.;
Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman,
G.; et al. 2021. Evaluating large language models trained on
code. arXiv preprint arXiv:2107.03374.
Chen, W.; Ma, X.; Wang, X.; and Cohen, W. W. 2023. Program of Thoughts Prompting: Disentangling Computation
from Reasoning for Numerical Reasoning Tasks. Transac_tions on Machine Learning Research._
Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra,
G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.;
Gehrmann, S.; et al. 2023. Palm: Scaling language modeling with pathways. Journal of Machine Learning Research,
24(240): 1–113.
Cobbe, K.; Kosaraju, V.; Bavarian, M.; Chen, M.; Jun, H.;
Kaiser, L.; Plappert, M.; Tworek, J.; Hilton, J.; Nakano, R.;
et al. 2021. Training verifiers to solve math word problems.
_arXiv preprint arXiv:2110.14168._
Fu, Y.; Peng, H.; Sabharwal, A.; Clark, P.; and Khot, T. 2023.
Complexity-Based Prompting for Multi-step Reasoning. In
_The Eleventh International Conference on Learning Repre-_
_sentations._
Gao, L.; Madaan, A.; Zhou, S.; Alon, U.; Liu, P.; Yang,
Y.; Callan, J.; and Neubig, G. 2023. PAL: program-aided
language models. In International Conference on Machine
_Learning, 10764–10799. PMLR._
Gemini-Team; Anil, R.; Borgeaud, S.; Wu, Y.; Alayrac, J.B.; Yu, J.; Soricut, R.; Schalkwyk, J.; Dai, A. M.; Hauth, A.;
et al. 2023. Gemini: a family of highly capable multimodal
models. arXiv preprint arXiv:2312.11805.
Goyal, T.; Li, J. J.; and Durrett, G. 2022. News summarization and evaluation in the era of gpt-3. arXiv preprint
_arXiv:2209.12356._
Hendrycks, D.; Burns, C.; Kadavath, S.; Arora, A.; Basart,
S.; Tang, E.; Song, D.; and Steinhardt, J. 2021. Measuring Mathematical Problem Solving With the MATH Dataset.
_NeurIPS._
Huang, J.; and Chang, K. C.-C. 2023. Towards Reasoning in
Large Language Models: A Survey. In Findings of the As_sociation for Computational Linguistics: ACL 2023, 1049–_
1065. Toronto, Canada: Association for Computational Linguistics.
Imani, S.; Du, L.; and Shrivastava, H. 2023. MathPrompter:
Mathematical Reasoning using Large Language Models.
In Proceedings of the 61st Annual Meeting of the Asso_ciation for Computational Linguistics (Volume 5: Industry_
_Track), 37–42. Toronto, Canada: Association for Computa-_
tional Linguistics.
Jiang, A. Q.; Sablayrolles, A.; Mensch, A.; Bamford, C.;
Chaplot, D. S.; Casas, D. d. l.; Bressand, F.; Lengyel, G.;
Lample, G.; Saulnier, L.; et al. 2023. Mistral 7B. arXiv
_preprint arXiv:2310.06825._
Jiao, W.; Wang, W.; Huang, J.-t.; Wang, X.; Shi, S.; and Tu,
Z. 2023. Is ChatGPT a good translator? Yes with GPT-4 as
the engine. arXiv preprint arXiv:2301.08745.
Kaddour, J.; Harris, J.; Mozes, M.; Bradley, H.; Raileanu,
R.; and McHardy, R. 2023. Challenges and Applications of
Large Language Models. arXiv:2307.10169.
Kasneci, E.; Seßler, K.; K¨uchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; G¨unnemann,
S.; H¨ullermeier, E.; et al. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. Learning and individual differences, 103: 102274.
Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa,
Y. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:
22199–22213.
Korinek, A. 2023. Generative AI for economic research: Use
cases and implications for economists. Journal of Economic
_Literature, 61(4): 1281–1317._
Liang, Z.; Yu, W.; Rajpurohit, T.; Clark, P.; Zhang, X.; and
Kalyan, A. 2023. Let GPT be a Math Tutor: Teaching Math
Word Problem Solvers with Customized Exercise Generation. In The 2023 Conference on Empirical Methods in Nat_ural Language Processing._
Lu, P.; Bansal, H.; Xia, T.; Liu, J.; Li, C.; Hajishirzi, H.;
Cheng, H.; Chang, K.-W.; Galley, M.; and Gao, J. 2024.
MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts. In International Conference
_on Learning Representations (ICLR)._
Lu, P.; Qiu, L.; Yu, W.; Welleck, S.; and Chang, K.-W. 2023.
A Survey of Deep Learning for Mathematical Reasoning.
In Proceedings of the 61st Annual Meeting of the Associa_tion for Computational Linguistics (Volume 1: Long Papers),_
14605–14631. Toronto, Canada: Association for Computational Linguistics.
Luo, H.; Sun, Q.; Xu, C.; Zhao, P.; Lou, J.; Tao, C.; Geng,
X.; Lin, Q.; Chen, S.; and Zhang, D. 2023. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct. arXiv preprint
_arXiv:2308.09583._
OpenAI. 2023a. GPT-4 Technical Report. arXiv preprint
_arXiv:2303.08774._
OpenAI. 2023b. GPT-4V(ision) system card.
Qiao, S.; Ou, Y.; Zhang, N.; Chen, X.; Yao, Y.; Deng, S.;
Tan, C.; Huang, F.; and Chen, H. 2023. Reasoning with Language Model Prompting: A Survey. In Proceedings of the
_61st Annual Meeting of the Association for Computational_
-----
**Appendix**
**Proficiency Analysis**
To further analyze the proficiency of GPT models on the
MATH dataset, we compare the results across different
levels and question types in Figures 6 and 7. GPT-3.5
demonstrates relatively high performance in Algebra and
Pre-algebra questions, but for the remaining categories, its
pass@3 metric falls below 50% at Levels 4 and 5. In contrast, GPT-4 shows strong performance across all types and
levels, with only Level 5 questions in Geometry, Intermediate Algebra, and Pre-calculus scoring below 50% pass@3.
0.9
Algebra 0.99 0.90 0.85 0.75 0.50
0.8
Counting & 0.90 0.74 0.62 0.48 0.34
Probability 0.7
Geometry 0.69 0.71 0.62 0.43 0.14 0.6
Intermediate 0.81 0.61 0.42 0.28 0.11 0.5
Algebra
0.4
Number 0.95 0.78 0.63 0.43 0.31
Theory
0.3
Prealgebra 0.90 0.88 0.82 0.70 0.52
0.2
Precalculus 0.83 0.63 0.40 0.24 0.11 0.1
Level 1 Level 2 Level 3 Level 4 Level 5
Figure 6: Analysis MATH GPT-3.5. Results of pass@3 for
each question category are shown.
0.9
Algebra 1.00 0.96 0.97 0.95 0.88
0.8
Counting & 0.97 0.94 0.86 0.83 0.65
Probability 0.7
Geometry 0.77 0.79 0.85 0.66 0.45 0.6
Intermediate 0.94 0.86 0.77 0.67 0.35 0.5
Algebra
0.4
Number 1.00 0.98 0.89 0.88 0.76
Theory
0.3
Prealgebra 0.95 0.94 0.92 0.83 0.79
0.2
Precalculus 0.91 0.78 0.69 0.51 0.30 0.1
Level 1 Level 2 Level 3 Level 4 Level 5
Figure 7: Analysis MATH GPT-4o. Results of pass@3 for
each question category are shown.
This proficiency can be observed in the following example of a Level 5 question from the Algebra category of
the MATH dataset. Both GPT models successfully solve the
problem, though GPT-4o again demonstrates its talkative nature requiring many tokens to arrive at the final result.
_Linguistics (Volume 1: Long Papers), 5368–5393. Toronto,_
Canada: Association for Computational Linguistics.
Thoppilan, R.; De Freitas, D.; Hall, J.; Shazeer, N.; Kulshreshtha, A.; Cheng, H.-T.; Jin, A.; Bos, T.; Baker, L.; Du,
Y.; et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux,
M.-A.; Lacroix, T.; Rozi`ere, B.; Goyal, N.; Hambro, E.;
Azhar, F.; Rodriguez, A.; Joulin, A.; Grave, E.; and Lample,
G. 2023. LLaMA: Open and Efficient Foundation Language
Models. arXiv:2302.13971.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones,
L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information pro_cessing systems, 30._
Wang, H.; Xin, H.; Zheng, C.; Liu, Z.; Cao, Q.; Huang, Y.;
Xiong, J.; Shi, H.; Xie, E.; Yin, J.; Li, Z.; and Liang, X.
2024. LEGO-Prover: Neural Theorem Proving with Growing Libraries. In The Twelfth International Conference on
_Learning Representations._
Wang, X.; Wei, J.; Schuurmans, D.; Le, Q. V.; Chi, E. H.;
Narang, S.; Chowdhery, A.; and Zhou, D. 2023. SelfConsistency Improves Chain of Thought Reasoning in Language Models. In The Eleventh International Conference on
_Learning Representations._
Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.;
Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022. Chain-ofthought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35:
24824–24837.
Yuan, Z.; Yuan, H.; Tan, C.; Wang, W.; and Huang, S. 2023.
How well do Large Language Models perform in Arithmetic
tasks? arXiv preprint arXiv:2304.02015.
Zhang, T.; Ladhak, F.; Durmus, E.; Liang, P.; McKeown, K.;
and Hashimoto, T. B. 2024. Benchmarking large language
models for news summarization. Transactions of the Asso_ciation for Computational Linguistics, 12: 39–57._
Zhang, Z.; Zhang, A.; Li, M.; and Smola, A. 2023. Automatic Chain of Thought Prompting in Large Language Models. In The Eleventh International Conference on Learning
_Representations._
Zhong, W.; Cui, R.; Guo, Y.; Liang, Y.; Lu, S.; Wang, Y.;
Saied, A.; Chen, W.; and Duan, N. 2023. Agieval: A humancentric benchmark for evaluating foundation models. arXiv
_preprint arXiv:2304.06364._
-----
**Restricted number of tokens**
Across all datasets and methods, the GPT-4 model struggles with some questions, not due to poor reasoning quality, but because it requires too many tokens for its argumentation. The following example from the GSM8K
dataset demonstrates how the model’s detailed thinking leads to exceeding the token limit before arriving at an answer when using the PoT method. For
comparison, we have also included the GPT-3.5 output.
**Question: Sasha notices that prices for lumber have gone**
up 50% in the last few months after she bought some lumber. Since she has leftovers, she decides to sell them. She
has leftover ten 2 x 4 x 10 boards that cost her $10 each.
She also has five 4 x 4 x 10 boards she bought for $16
each. If she sells them all, how much profit does she make?
**True answer: How much did the 2 x 4s cost? ** She**
bought the 2 x 4s for $100 because 10 x 10 = 100
How much did the 4 x 4s cost? ** She bought the 4 x 4s
for 80 because 5 x 16 = 80
How much did the lumber cost? ** She paid $180 for the
lumber because 100 + 80 = 180
How much does the same number now sell for? ** The
same number now sells for $270 because 180 x 1.5 = 270
How much did she make in profit? ** She made $90 in
profit because 270 - 180 = 90
**GPT-3.5: Sasha bought ten 2 x 4 x 10 boards for $10 each,**
so the total cost was 10 x $10 = $100. She also bought five
4 x 4 x 10 boards for $16 each, so the total cost was 5 x
$16 = $80. The total cost of all the lumber she bought was
$100 + $80 = $180.
If lumber prices have gone up 50%, she can sell the lumber
for 150% of what she paid. 150% of $180 is $180 x 1.5 =
$270.
Therefore, Sasha makes a profit of $270 - $180 = $90 when
she sells all the lumber. The answer is $90.
**GPT-4o:**
# Initial costs
cost per 2x4x10 = 10
cost per 4x4x10 = 16
# Quantities
num 2x4x10 = 10
num 4x4x10 = 5
# Total initial costs
total cost 2x4x10 = cost per 2x4x10 * num 2x4x10
total cost 4x4x10 = cost per 4x4x10 * num 4x4x10
total initial cost = total cost 2x4x10 + total cost 4x4x10
# Price increase
price increase rate = 1.50
# Selling prices
selling price per 2x4x10 = cost per 2x4x10 *
price increase rate
selling price per 4x4x10 = cost per 4x4x10 *
price increase rate
# Total selling prices
total selling price 2x4x10 = selling price per 2x4x10 *
num 2x4x10
total selling price 4x4x10 = selling price per 4x4x10
_MATH Dataset - Algebra - Level 5_
**Question: What is the smallest real number x in the**
domain of the function
_g(x) =_ (x − 3)[2] _−_ (x − 8)[2] ?
**True answer: A real numberp** _x is in the domain of g_
if and only if (x − 3)[2] _−_ (x − 8)[2] _≥_ 0. Expanding this
out and simplifying, we get 10x − 55 ≥ 0; the smallest
solution is x = [55]10 [= 11]
2 [.]
**GPT-3.5: To find the smallest real number x in the**
domain of the function, we need to ensure that the
expression inside the square root is non-negative.
Expanding the expression gives us
_x[2]_ _−_ 6x + 9 − (x[2] _−_ 16x + 64) = _√−10x + 55._
For the expression under the square root to be non-p
negative, we must have −10x + 55 ≥ 0.
Solving this inequality gives us x ≤ 10[55] [= 5][.][5][.]
Therefore, the smallest real number x in the domain of
the function is 5.5 .
**GPT-4o: To find the smallest real number x in the**
domain of the function g(x) = (x − 3)[2] _−_ (x − 8)[2],
we need to ensure the expression inside the square
p
root is non-negative because the square root function is
only defined for non-negative values.
First, simplify the expression inside the square root:
(x _−_ 3)[2] _−_ (x _−_ 8)[2] Using the identity for the difference
of squares, a[2] _−_ _b[2]_ = (a − _b)(a + b), let a = x −_ 3 and
_b = x −_ 8:
(x−3)[2] _−(x−8)[2]_ = [(x−3)−(x−8)][(x−3)+(x−8)]
Calculate each part separately:
(x − 3) − (x − 8) = x − 3 − _x + 8 = 5_
(x − 3) + (x − 8) = x − 3 + x − 8 = 2x − 11
Thus, we have:
(x − 3)[2] _−_ (x − 8)[2] = 5(2x − 11)
We now substitute back into the function g(x):
_g(x) =_ 5(2x − 11)
For g(x) to be defined, the expression inside thep
square root must be non-negative:
5(2x − 11) ≥ 0
Solving for x:
2x − 11 ≥ 0
2x ≥ 11
_x_
_≥_ [11]2
So, the smallest real number x in the domain of the
function g(x) is: [11]
2
-----
**Statistical analysis**
To test our hypothesis that the differences between the foundation models are significant, while between the methods
they are not, we conduct an ANOVA analysis on the results
of the MATH dataset. The analysis for the foundation models yields a p-value of 1.9e-5, indicating a highly significant
difference between them. In contrast, the analysis for the
methods resulted in a p-value of 0.72, suggesting that the
differences between methods are not statistically significant.
We also conduct a more detailed analysis of the differences between the foundation models. As illustrated in Figure 8, the results reveal that the difference between GPT-3.5
and LLaMA 3-70B is the only comparison that is not statistically significant. This suggests that while most foundation models show significant performance variations, GPT3.5 and LLaMA 3-70B are more comparable in their results
on the MATH dataset. This finding further emphasizes the
importance of choosing the right foundational model, as it
has a substantial impact on performance.
1.0
method. For the SVAMP dataset, the results are similarly
strong, with pass@3 rates consistently exceeding 70%. Although GPT-4o and LLaMA 3-70B slightly outperform the
other models, the margin is small.
In contrast, the results for the more advanced, collegelevel AQuA dataset are more varied. Interestingly, GPT-3.5
with Zero-Shot CoT nearly matches the best performance
of GPT-4o while being significantly more resource-efficient.
For LLaMA 3-70B, AutoCoT and Complex CoT deliver the
best results.
The MATH dataset reveals the largest disparity between
open-source and closed-source models. While LLaMA 3
struggles to surpass the 50% mark, GPT-4o consistently exceeds 60%, and even GPT-3.5 achieves between 50% and
60% pass@3.
**Few-shot examples**
For reproducibility, we include all few-shot examples used
in the benchmarking survey, which are also available in
our GitHub repository. Merely Zero-Shot CoT did not use
any specific examples but only the prompt Let’s think
step by step. For CoT, we used the prompts from the
original paper (Wei et al. 2022), and the same prompts were
used for Self-Consistency, see Figures 13 and 14. For MATH
dataset we used the first 8 samples from the according training dataset.
As suggested by the original paper, we extracted the samples for the Auto CoT method using a clustering approach
(Zhang et al. 2023). All used samples can be found in Figures 22, 23, 24, 25, 26 and 27. The Complex CoT approach
extracts the “most complex” samples from the training data,
to guide the reasoning process (Fu et al. 2023). We report our
extracted samples in Figures 28, 29, 30, 31, 32 and 33. Due
to limited GPU space, we used only 6 shots for the MATH
dataset in the Complex CoT approach.
For PAL, we use the prompts suggested by the original
paper (Gao et al. 2023), as seen in Figure 15 and 16. PoT
applies different prompts for all datasets (Chen et al. 2023),
they can found in Figures 17, 18, 19, 20 and 21. For MultiArith dataset, the original implementation only offered a
Zero-Shot Approach, which we applied.
GPT-3.5
0.8
0.6
GPT-4o
LLaMA 3-70B
0.4
0.2
LLaMA 3-8B
|1.000|0.007|0.342|0.006|
|---|---|---|---|
|0.007|1.000|0.002|0.000|
|0.342|0.002|1.000|0.037|
|0.006|0.000|0.037|1.000|
Figure 8: p-values of the pairwise t-test comparison between
the pass@3 results of the foundation models across all
methods on the MATH dataset.
**Accuracy results**
Table 3 lists all accuracy results, corresponding to the
pass@1 metric. The distribution of these results is similar
to the pass@3 metric. GPT-4o with Self-Consistency CoT
consistently outperforms all other methods and foundational
models.
**Visualized Efficiency Comparison**
To balance performance and resource consumption, we
present the results across all datasets visually in Figures 9,
10, 11 and 12. On the smallest dataset, Multiarith, which
is at the grade school level, nearly all methods and models
consistently achieve over 90% pass@3. Even the smaller
LLaMA 3-8B model proves to be an efficient and costeffective option, particularly when used with the AutoCoT
-----
|Col1|GSM8K SVAMP Multi-Arith AQuA MATH|
|---|---|
|GPT-3.5 GPT-4o CoT LLaMA 3-8B LLaMA 3-70B|0.60 0.35 0.73 0.34 0.95 0.15 0.54 0.36 0.41 0.40 ± ± ± ± ± 0.93 ± 0.21 0.94 ± 0.22 1.00 ± 0.00 0.75 ± 0.34 0.71 ± 0.40 0.39 0.31 0.56 0.33 0.83 0.17 0.31 0.23 0.14 0.23 ± ± ± ± ± 0.67 0.32 0.75 0.28 0.92 0.10 0.34 0.30 0.23 0.29 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o Auto CoT LLaMA 3-8B LLaMA 3-70B|0.47 0.33 0.83 0.31 0.77 0.27 0.58 0.37 0.36 0.38 ± ± ± ± ± 0.88 0.24 0.94 0.21 0.99 0.04 0.65 0.33 0.70 0.39 ± ± ± ± ± 0.43 0.32 0.67 0.33 0.90 0.13 0.32 0.23 0.08 0.14 ± ± ± ± ± 0.78 0.30 0.84 0.26 0.99 0.05 0.53 0.33 0.31 0.35 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o ZeroShot CoT LLaMA 3-8B LLaMA 3-70B|0.82 0.29 0.82 0.30 0.97 0.10 0.60 0.35 0.34 0.39 ± ± ± ± ± 0.79 0.31 0.91 0.26 0.97 0.08 0.63 0.36 0.50 0.43 ± ± ± ± ± 0.31 0.25 0.46 0.28 0.56 0.23 0.10 0.11 0.06 0.14 ± ± ± ± ± 0.55 0.32 0.59 0.29 0.80 0.17 0.10 0.16 0.08 0.18 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o Complex CoT LLaMA 3-8B LLaMA 3-70B|0.81 0.31 0.78 0.35 0.97 0.10 0.32 0.32 0.38 0.40 ± ± ± ± ± 0.89 0.25 0.94 0.22 0.80 0.00 0.59 0.31 0.55 0.46 ± ± ± ± ± 0.36 0.34 0.64 0.36 0.93 0.13 0.38 0.26 0.14 0.24 ± ± ± ± ± 0.74 0.32 0.82 0.31 0.99 0.04 0.59 0.34 0.31 0.37 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o SelfConsistency CoT LLaMA 3-8B LLaMA 3-70B|0.34 0.22 0.77 0.40 0.96 0.17 0.66 0.44 0.19 0.33 ± ± ± ± ± 0.95 ± 0.21 0.95 ± 0.21 1.00 ± 0.00 0.81 ± 0.35 0.76 ± 0.39 0.31 0.21 0.72 0.37 1.00 0.01 0.45 0.33 0.18 0.31 ± ± ± ± ± 0.82 0.35 0.89 0.26 1.00 0.01 0.46 0.38 0.30 0.37 ± ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o PAL LLaMA 3-8B LLaMA 3-70B|0.68 0.40 0.76 0.40 0.92 0.27 0.30 0.40 N/a ± ± ± ± 0.88 0.25 0.89 0.23 0.90 0.15 0.36 0.37 N/a ± ± ± ± 0.36 0.33 0.55 0.36 0.70 0.26 0.13 0.23 N/a ± ± ± ± 0.65 0.36 0.72 0.35 0.88 0.23 0.22 0.33 N/a ± ± ± ±|
|---|---|
|GPT-3.5 GPT-4o PoT LLaMA 3-8B LLaMA 3-70B|0.75 0.44 0.87 0.33 0.99 0.10 0.50 0.50 N/a ± ± ± ± 0.83 ± 0.38 0.80 ± 0.40 1.00 ± 0.00 0.66 ± 0.47 N/a 0.54 0.50 0.58 0.49 0.42 0.49 0.31 0.46 N/a ± ± ± ± 0.82 0.38 0.87 0.33 0.70 0.46 0.58 0.49 N/a ± ± ± ±|
|---|---|
Table 3: Accuracy results of all methods, models and datasets. The best performance for each dataset is highlighted in bold,
while the second-best performance is underlined.
|Col1|Col2|Col3|
|---|---|---|
||||
||||
|||70B 8B|
||||
|Col1|Col2|GPT|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
|||||33..55 44oo|
|||||33..55 44oo|
||||||
||||||
LLaMA 3 GPT
1.0 1.0
0.9 0.9
0.8 0.8
Pass@3
0.7 0.7
70B 3.5
0.6 8B 0.6 4o
1 sec 10 sec 1 min 0.1 1 10
Time per Sample Costs (Dollars for 1K samples)
CoT Auto CoT Complex CoT Self-C. CoT ZeroShot CoT PAL PoT
Figure 9: pass@3 results on the SVAMP dataset. On the left side the LLaMA foundation models are shown, based on the
elapsed computation time and on the right side the GPT foundation models are compared based on the costs for the API calls.
-----
|Col1|Col2|Col3|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||||
||||70B 8B|
|Col1|Col2|GPT|Col4|
|---|---|---|---|
|||||
|||||
|||||
|||||
|||||
||||33..55 44oo|
LLaMA 3 GPT
1.0 1.0
0.9 0.9
0.8 0.8
0.7 0.7
Pass@3
0.6 0.6
0.5 70B 0.5 3.5
8B 4o
0.4 0.4
1 sec 10 sec 1 min 0.1 1 10
Time per Sample Costs (Dollars for 1K samples)
CoT Auto CoT Complex CoT Self-C. CoT ZeroShot CoT PAL PoT
Figure 10: pass@3 results on the Multi-Arith dataset. On the left side the LLaMA foundation models are shown, based on the
elapsed computation time and on the right side the GPT foundation models are compared based on the costs for the API calls.
|Col1|Col2|70B 8B|Col4|Col5|
|---|---|---|---|---|
||||||
||||||
||||||
|Col1|GPT|Col3|
|---|---|---|
||||
||||
||||
|||33..55 44oo|
LLaMA 3 GPT
1.0 1.0
70B
8B
0.8 0.8
0.6 0.6
Pass@3
0.4 0.4
3.5
4o
1 sec 10 sec 1 min 0.1 1 10
Time per Sample Costs (Dollars for 1K samples)
CoT Auto CoT Complex CoT Self-C. CoT ZeroShot CoT PAL PoT
Figure 11: pass@3 results on the AQuA dataset. On the left side the LLaMA foundation models are shown, based on the
elapsed computation time and on the right side the GPT foundation models are compared based on the costs for the API calls.
|LLaMA 3|Col2|Col3|
|---|---|---|
||||
||||
||||
||||
||||
|||70B|
||8B||
LLaMA 3 GPT
0.8 0.8
0.7 0.7
0.6 0.6
0.5 0.5
Pass@3
0.4 0.4
0.3 0.3
70B 3.5
0.2 0.2
8B 4o
1 sec 10 sec 40 sec 0.3 1 10 100
Time per Sample Costs (Dollars for 1K samples)
CoT Auto CoT Complex CoT Self-C. CoT ZeroShot CoT
Figure 12: pass@3 results on the MATH dataset. On the left side the LLaMA foundation models are shown, based on the
elapsed computation time and on the right side the GPT foundation models are compared based on the costs for the API calls.
-----
**Q: There are 15 trees in the grove. Grove workers will plant**
trees in the grove today. After they are done, there will be 21
trees. How many trees did the grove workers plant today?
**A: There are 15 trees originally. Then there were 21 trees**
after some more were planted. So there must have been 21 15 = 6. The answer is 6.
**Q: If there are 3 cars in the parking lot and 2 more**
cars arrive, how many cars are in the parking lot?
**A: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5.**
The answer is 5.
**Q: Leah had 32 chocolates and her sister had 42. If**
they ate 35, how many pieces do they have left in total?// A:
Originally, Leah had 32 chocolates. Her sister had 42. So in
total they had 32 + 42 = 74. After eating 35, they had 74 35 = 39. The answer is 39.
**Q: Jason had 20 lollipops. He gave Denny some lol-**
lipops. Now Jason has 12 lollipops. How many lollipops did
Jason give to Denny?
**A: Jason started with 20 lollipops. Then he had 12 after**
giving some to Denny. So he gave Denny 20 - 12 = 8. The
answer is 8.
**Q: Shawn has five toys. For Christmas, he got two**
toys each from his mom and dad. How many toys does he
have now?
**A: Shawn started with 5 toys. If he got 2 toys each from**
his mom and dad, then that is 4 more toys. 5 + 4 = 9. The
answer is 9.
**Q: There were nine computers in the server room.**
Five more computers were installed each day, from monday
to thursday. How many computers are now in the server
room?
**A: There were originally 9 computers. For each of 4 days, 5**
more computers were added. So 5 * 4 = 20 computers were
added. 9 + 20 is 29. The answer is 29.
**Q: Michael had 58 golf balls. On tuesday, he lost 23**
golf balls. On wednesday, he lost 2 more. How many golf
balls did he have at the end of wednesday?
**A: Michael started with 58 golf balls. After losing 23 on**
tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35
- 2 = 33 golf balls. The answer is 33.
**Q: Olivia has $23. She bought five bagels for $3 each. How**
much money does she have left?
**A: Olivia had 23 dollars. 5 bagels for 3 dollars each will be**
5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is
8. The answer is 8.
**Q: {question}.**
**A:**
Figure 13: 8-Shot Prompt for GSM8K, SVAMP and MultiArith dataset from Wei et al. (2022) for CoT and SelfConsistency CoT.
**Q: John found that the average of 15 numbers is 40. If 10 is**
added to each number then the mean of the numbers is?
**Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64**
**A: If 10 is added to each number, then the mean of the**
numbers also increases by 10. So the new mean would be 50.
The answer is (a).
**Q: If a / b = 3/4 and 8a + 5b = 22,then find the value**
of a.
**Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2**
**A: If a / b = 3/4, then b = 4a / 3. So 8a + 5(4a / 3) = 22. This**
simplifies to 8a + 20a / 3 = 22, which means 44a / 3 = 22. So
a is equal to 3/2. The answer is (b).
**Q: A person is traveling at 20 km/hr and reached his**
destiny in 2.5 hr then find the distance?// Answer Choices:
(a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
**A: The distance that the person traveled would have been 20**
km/hr * 2.5 hrs = 50 km. The answer is (e).
**Q: How many keystrokes are needed to type the num-**
bers from 1 to 500?
**Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e)**
1788
**A: There are 9 one-digit numbers from 1 to 9. There are 90**
two-digit numbers from 10 to 99. There are 401 three-digit
numbers from 100 to 500. 9 + 90(2) + 401(3) = 1392. The
answer is (b).
**Q: {question}.**
**Answer Choices: {answer choices}.**
**A:**
Figure 14: 4-Shot Prompt for AQuA dataset from Wei et al.
(2022) for CoT and Self-Consistency CoT.
-----
**#Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?**
money_initial = 23
bagels = 5
bagel_cost = 3
money_spent = bagels * bagel_cost
money_left = money_initial - money_spent
print(money_left)
**#Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at**
the end of wednesday?
golf_balls_initial = 58
golf_balls_lost_tuesday = 23
golf_balls_lost_wednesday = 2
golf_balls_left = golf_balls_initial - golf_balls_lost_tuesday - golf_balls_lost_wednesday
print(golf_balls_left)
**#Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How**
many computers are now in the server room?
computers_initial = 9
computers_per_day = 5
num_days = 4 # 4 days between monday and
# thursday
computers_added = computers_per_day - [num_days]
computers_total = computers_initial + computers_added
print(computers_total)
**#Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?**
cars_initial = 3
cars_arrived = 2
total_cars = cars_initial + cars_arrived
print(total_cars)
**#Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?**
leah_chocolates = 32
sister_chocolates = 42
total_chocolates = leah_chocolates + sister_chocolates
chocolates_eaten = 35
chocolates_left = total_chocolates - chocolates_eaten
print(chocolates_left)
**#Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees.**
How many trees did the grove workers plant today?
trees_initial = 15
trees_after = 21
trees_added = trees_after - trees_initial
print(trees_added)
**#Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?**
toys_initial = 5
mom_toys = 2
dad_toys = 2
total_received = mom_toys + dad_toys
total_toys = toys_initial + total_received
print(total_toys)
**#Q: {question}.**
Figure 15: 8-Shot Prompt for GSM8K, SVAMP, Multi-Arith dataset from Gao et al. (2023) for PAL.
-----
**Q: John found that the average of 15 numbers is 40. If 10 is**
added to each number then the mean of the numbers is?
**Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64**
# solution in Python:
def solution():
"""Question: How many keystrokes are
needed to type the numbers from 1
to 500?
Answer Choices:
(a) 1156 (b) 1392 (c) 1480
(d) 1562 (e) 1788
"""
count_one_digit = 9
count_two_digit = 90
count_three_digit = 401
total_keystrokes = count_one_digit
+ count_two_digit * 2
+ count_three_digit * 3
result = total_keystrokes
return result
**Q: A person is traveling at 20 km/hr and reached his destiny**
in 2.5 hr then find the distance?
**Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km**
(e) 50 km
# solution in Python:
def solution():
"""Question: A person is traveling at
20 km/hr and reached his destiny
in 2.5 hr then find the distance?
Answer Choices:
(a) 53 km (b) 55 km (c) 52 km
(d) 60 km (e) 50 km
"""
speed_km_hr = 20
time_hr = 2.5
distance_km = speed_km_hr * time_hr
result = distance_km
return result
**Q: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.**
**Answer Choices: (a) 1/2 (b) 3/2 (c) 5/2 (d) 4/2 (e) 7/2**
# solution in Python:
def solution():
"""Question: If a / b = 3/4 and
8a + 5b = 22,then find the value of a.
Answer Choices:
(a) 1/2 (b) 3/2 (c) 5/2
(d) 4/2 (e) 7/2
"""
a_b = 3/4
b = 22 / (8 * a_b + 5)
a = a_b * b
result = a
return result
**Q: John found that the average of 15 numbers is 40. If 10 is**
added to each number then the mean of the numbers is?
**Answer Choices: (a) 50 (b) 45 (c) 65 (d) 78 (e) 64**
# solution in Python:
def solution():
"""Question: John found that the
average of 15 numbers is 40. If
10 is added to each number then
the mean of the numbers is?
Answer Choices:
(a) 50 (b) 45 (c) 65 (d) 78 (e) 64
"""
mean = 40
numbers = 15
added_per_number = 10
sum = mean * numbers
new_sum = sum
+ (added_per_number * numbers)
new_mean = new_sum / numbers
result = new_mean
return result
**Q: {question}**
**Answer Choices: {answer choices}**
# solution in Python:
Figure 16: 4-Shot Prompt for AQuA dataset from Gao et al.
(2023) for PAL.
-----
Question: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day
with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How much in dollars does she make every
day at the farmers’ market?
# Python code, return ans
total_eggs = 16
eaten_eggs = 3
baked_eggs = 4
sold_eggs = total_eggs - eaten_eggs - baked_eggs
dollars_per_egg = 2
ans = sold_eggs * dollars_per_egg
Question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?
# Python code, return ans
bolts_of_blue_fiber = 2
bolts_of_white_fiber = num_of_blue_fiber / 2
ans = bolts_of_blue_fiber + bolts_of_white_fiber
Question: Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in repairs. This increased the
value of the house by 150%. How much profit did he make?
# Python code, return ans
cost_of_original_house = 80000
increase_rate = 150 / 100
value_of_house = (1 + increase_rate) * cost_of_original_house
cost_of_repair = 50000
ans = value_of_house - cost_of_repair - cost_of_original_house
Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds, mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In the morning, she gives her flock of
chickens 15 cups of feed. In the afternoon, she gives her chickens another 25 cups of feed. How many cups of feed does she need to
give her chickens in the final meal of the day if the size of Wendi’s flock is 20 chickens?
# Python code, return ans
numb_of_chickens = 20
cups_for_each_chicken = 3
cups_for_all_chicken = num_of_chickens * cups_for_each_chicken
cups_in_the_morning = 15
cups_in_the_afternoon = 25
ans = cups_for_all_chicken - cups_in_the_morning - cups_in_the_afternoon
Question: Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs only 60% of
the price. Kylar wants to buy 16 glasses. How much does he need to pay for them?
# Python code, return ans
num_glasses = 16
first_glass_cost = 5
second_glass_cost = 5 * 0.6
ans = 0
for i in range(num_glasses):
if i % 2 == 0:
ans += first_glass_cost
else:
ans += second_glass_cost
Figure 17: 8-Shot Prompt for GSM8K dataset from Chen et al. (2023) for PoT - Part 1.
-----
Question: Marissa is hiking a 12-mile trail. She took 1 hour to walk the first 4 miles, then another hour to walk the next two miles. If
she wants her average speed to be 4 miles per hour, what speed (in miles per hour) does she need to walk the remaining distance?
# Python code, return ans
average_mile_per_hour = 4
total_trail_miles = 12
remaining_miles = total_trail_miles - 4 - 2
total_hours = total_trail_miles / average_mile_per_hour
remaining_hours = total_hours - 2
ans = remaining_miles / remaining_hours
Question: Carlos is planting a lemon tree. The tree will cost $90 to plant. Each year it will grow 7 lemons, which he can sell for $1.5
each. It costs $3 a year to water and feed the tree. How many years will it take before he starts earning money on the lemon tree?
# Python code, return ans
total_cost = 90
cost_of_watering_and_feeding = 3
cost_of_each_lemon = 1.5
num_of_lemon_per_year = 7
ans = 0
while total_cost > 0:
total_cost += cost_of_watering_and_feeding
total_cost -= num_of_lemon_per_year * cost_of_each_lemon
ans += 1
Question: When Freda cooks canned tomatoes into sauce, they lose half their volume. Each 16 ounce can of tomatoes that she uses
contains three tomatoes. Freda’s last batch of tomato sauce made 32 ounces of sauce. How many tomatoes did Freda use?
# Python code, return ans
lose_rate = 0.5
num_tomato_contained_in_per_ounce_sauce = 3 / 16
ounce_sauce_in_last_batch = 32
num_tomato_in_last_batch = ounce_sauce_in_last_batch
- [num_tomato_contained_in_per_ounce_sauce]
ans = num_tomato_in_last_batch / (1 - lose_rate)
Question: Jordan wanted to surprise her mom with a homemade birthday cake. From reading the instructions, she knew it would take
20 minutes to make the cake batter and 30 minutes to bake the cake. The cake would require 2 hours to cool and an additional 10
minutes to frost the cake. If she plans to make the cake all on the same day, what is the latest time of day that Jordan can start making
the cake to be ready to serve it at 5:00 pm?
# Python code, return ans
minutes_to_make_batter = 20
minutes_to_bake_cake = 30
minutes_to_cool_cake = 2 * 60
minutes_to_frost_cake = 10
total_minutes = minutes_to_make_batter + minutes_to_bake_cake
+ minutes_to_cool_cake + minutes_to_frost_cake
total_hours = total_minutes / 60
ans = 5 - total_hours
Question: question
# Python code, return ans
Figure 18: 8-Shot Prompt for GSM8K dataset from Chen et al. (2023) for PoT - Part 2.
-----
Read the following passages to answer questions with Python code, store the result as a ’ans’ variable:
# Passage: James bought 93 red and 10 blue stickers, he used 31 red sticker on his fridge
and 7 blue stickers on his laptop.
# Question: How many red stickers does James have?
original_red_stickers = 93
used_red_stickers = 31
ans = original_red_stickers - used_red_stickers
# Passage: Allen went to supermarket to buy eggs, each egg costs 80 dollars, if the
discount is 29 dollars.
# Question: How much do you have to pay to buy for each egg?
original_egg_price_in_dollars = 80
discount_dollars = 29
ans = original_egg_price_in_dollars - discount_dollars
# Passage: Dianna collects both cases and books. He bought 22 cases and 5 books from the
store. Now he has 57 cases and 25 books.
# Question: How many books did danny have at first?
num_books_bought_at_store = 5
num_books_now = 25
ans = num_books_now - num_books_bought_at_store
# Passage: There were 108 chickens and 20 sheeps at the farm, some of chickens and sheeps
were sold. There are 87 chickens and 18 sheeps left now.
# Question: How many chickens were sold?
num_chicken_before = 108
num_chicken_now = 87
ans = num_chicken_before - num_chicken_now
# Passage: Katty scored 2 goals on monday, 8 goals on tuesday and 9 goals on wednesday.
# Question: How many did Katty score on monday and wednesday?
num_goals_on_monday = 2
num_goals_on_wednesday = 9
ans = num_goals_on_monday + num_goals_on_wednesday
# Passage: There are 5 girls and 4 boys in the Masquerade, 12 more girls and 7 more boys
joined.
# Question: How many more girls than boys are in the Masquerade?
num_girls_before = 5
num_girls_joined = 12
num_boys_before = 4
num_boys_joined = 7
total_girls = num_girls_before + num_girls_joined
total_boys = num_boys_before + num_boys_joined
ans = total_girls - total_boys
# Passage: Joseph and Getty went to buy ice creams, they together bought 36 ice creams.
On the way back, Joseph ate 12 of the ice creasm, and he has 2 ice creams left now.
# Question: How much ice creasm did Getty purchase?
num_ice_creams_bought_by_joseph = 2 + 12
total_ice_creams = 36
ans = total_ice_creams - num_ice_creams_bought_by_joseph
# Passage: {body}
# Question: {question}
Figure 19: 7-Shot Prompt for SVAMP dataset from Chen et al. (2023) for PoT.
-----
from sympy import Symbol
from sympy import simplify
import math
from sympy import solve
# solve(equations, variable): solving the equations and return the variable value.
# Example 1: In a flight of 600 km, an aircraft was slowed down due to bad weather. Its
average speed for the trip was reduced by 200 km/hr and the time of flight increased by
30 minutes. The duration of the flight is:
# Answer option: [’A)1 hour’, ’B)2 hours’, ’C)3 hours’, ’D)4 hours’, ’E)5 hours’]
# Solution 1 as Python code, returns ans
duration = Symbol(’duration’, positive=True)
delay = 30 / 60
total_disntace = 600
original_speed = total_disntace / duration
reduced_speed = total_disntace / (duration + delay)
ans = solve(original_speed - reduced_speed - 200, duration, dict=True)
# Example 2: M men agree to purchase a gift for Rs. D. If 3 men drop out how much more
will each have to contribute towards the purchase of the gift?
# Answer options: [’A)D/(M-3)’, ’B)MD/3’, ’C)M/(D-3)’, ’D)3D/(M2-3M)’, ’E)None of these’]
# Solution 2 as Python code, returns ans
M = Symbol(’M’)
D = Symbol(’D’)
cost_before_dropout = D / M
cost_after_dropout = D / (M - 3)
ans=simplify(cost_after_dropout - cost_before_dropout)
# Example 3: A sum of money at simple interest amounts to Rs. 815 in 3 years and to Rs.
854 in 4 years. The sum is:
# Answer option: [’A)Rs. 650’, ’B)Rs. 690’, ’C)Rs. 698’, ’D)Rs. 700’, ’E)None of these’]
# Solution 3 as Python code, returns ans
deposit = Symbol(’deposit’, positive=True)
interest = Symbol(’interest’, positive=True)
money_in_3_years = deposit + 3 * interest
money_in_4_years = deposit + 4 * interest
solution = solve([money_in_3_years - 815, money_in_4_years - 854], [deposit, interest],
dict=True)
ans = solution[deposit]
# Example 4: Find out which of the following values is the multiple of X, if it is
divisible by 9 and 12?
# Answer option: [’A)36’, ’B)15’, ’C)17’, ’D)5’, ’E)7’]
# Solution 4 as Python code, returns ans
options = [36, 15, 17, 5, 7]
for option in options:
if option % 9 == 0 and option % 12 == 0:
ans = option
break
Figure 20: 7-Shot Prompt for AQuA dataset from Chen et al. (2023) for PoT - Part 1.
-----
# Example 5: 35% of the employees of a company are men. 60% of the men in the company
speak French and 40% of the employees of the company speak French. What is % of the
women in the company who do not speak French?
# Answer option: [’A)4%’, ’B)10%’, ’C)96%’, ’D)90.12%’, ’E)70.77%’]
# Solution 5 as Python code, returns ans
num_women = 65
men_speaking_french = 0.6 * 35
employees_speaking_french = 0.4 * 100
women_speaking_french = employees_speaking_french - men_speaking_french
women_not_speaking_french=num_women - women_speaking_french
ans = women_not_speaking_french / num_women
# Example 6: In one hour, a boat goes 11 km/hr along the stream and 5 km/hr against the
stream. The speed of the boat in still water (in km/hr) is:
# Answer option: [’A)4 kmph’, ’B)5 kmph’, ’C)6 kmph’, ’D)7 kmph’, ’E)8 kmph’]
# Solution 6 as Python code, returns ans
boat_speed = Symbol(’boat_speed’, positive=True)
stream_speed = Symbol(’stream_speed’, positive=True)
along_stream_speed = 11
against_stream_speed = 5
ans = solve([boat_speed + stream_speed - along_stream_speed, boat_speed - stream_speed
- against_stream_speed], [boat_speed, stream_speed], dict=True)
# Example 7: The difference between simple interest and C.I. at the same rate for Rs.5000
for 2 years in Rs.72. The rate of interest is?
# Answer option: [’A)10%’, ’B)12%’, ’C)6%’, ’D)8%’, ’E)4%’]
# Solution 7 as Python code, returns ans
interest_rate = Symbol(’interest_rate’, positive=True)
amount = 5000
amount_with_simple_interest = amount * (1 + 2 * interest_rate / 100)
amount_with_compound_interest = amount * (1 + interest_rate / 100) ** 2
solution = solve(amount_with_compound_interest - amount_with_simple_interest - 72,
interest_rate, dict=True)
ans = solution[interest_rate]
# Example 8: The area of a rectangle is 15 square centimeters and the perimeter is 16
centimeters. What are the dimensions of the rectangle?
# Answer option: [’A)2&4’, ’B)3&5’, ’C)4&6’, ’D)5&7’, ’E)6&8’]
# Solution 8 as Python code, returns ans
width = Symbol(’width’, positive=True)
height = Symbol(’height’, positive=True)
area = 15
permimeter = 16
ans = solve([width * height - area, 2 * (width + height) - permimeter], [width, height],
dict=True)
# Question 9: {question}
# Answer option: {answer_options}
# Write Python Code to solve question 9. Store your result as a variable named ’ans’.
Figure 21: 7-Shot Prompt for AQuA dataset from Chen et al. (2023) for PoT - Part 2.
-----
Q: Joan wants to visit her family who live 480 miles away. If she drives at a rate of 60 mph and takes a lunch break taking 30
minutes, and 2 bathroom breaks taking 15 minutes each, how many hours did it take her to get there?
A: Let’s think step by step. The driving time is 480/60= _<<480/60=8>>8_ hours The time for breaks was
30+15+15=<<30+15+15=60>>60 minutes So she spent 60/60=<<60/60=1>>1 hour for rest stops. So it took her
8+1=<<8+1=9>>9 hours Final answer: 9 The answer is 9.
Q: At the bookstore, Sarah bought 6 paperback books and 4 hardback books. Her brother bought one-third as many paperback books as Sarah bought, and two times the number of hardback books that she bought. How many books did her brother buy in
total?
A: Let’s think step by step. Her brother bought 6/3=<<6/3=2>>2 paperbacks He bought 4*2=<<4*2=8>>8 hardbacks In total
he bought 2+8=<<2+8=10>>10 books Final answer: 10 The answer is 10.
Q: Gina is participating in a neighborhood clean-up filling bags with litter. She was able to collect two bags of litter herself.
The rest of the neighborhood together collected 82 times as much as Gina did by herself. Each bag of litter weighs 4 pounds. How
many pounds was all the litter they collected?
A: Let’s think step by step. The rest of the neighborhood collected 82 * 2 = <<82*2=164>>164 bags of litter. With Gina’s bags,
they had 164 + 2 = <<164+2=166>>166 bags of litter. Thus, the litter weighed 166 * 4 = <<166*4=664>>664 pounds. Final
answer: 664 The answer is 664.
Q: John’s neighbor tells him to walk his dog for 1 hour each day for a total of $10. He does this for April, save for the 4
Sundays in April. He later spent $50 on books and gave his sister Kaylee the same amount. How much money did John have left?
A: Let’s think step by step. April has 30 days, so if he didn’t walk the dog on Sundays, he worked for a total of 30 days - 4 days =
_<<30-4=26>>26 days. If each day he earned $10, he ends up with 26 days * $10/day = $<<26*10=260>>260 . After buying_
books and giving his sister $50, he ends up with $260 - $50 - $50 = $<<260-50-50=160>>160 Final answer: 160 The answer is 160.
Q: Three years ago, Bethany was twice the age of her younger sister. In 5 years, her younger sister will be 16. How old is
Bethany now?
A: Let’s think step by step. First find how old Bethany’s sister currently is: 16 years - 5 years = <<16-5=11>>11 years Then find
how old her sister was 3 years ago: 11 years - 3 years = <<11-3=8>>8 years Then double that age to find how old Bethany was
three years ago: 8 years * 2 = <<8*2=16>>16 years Then add 3 years to that number to find how old Bethany is now: 16 years +
3 years = <<16+3=19>>19 years Final answer: 19 The answer is 19.
Q: Pat is having a picnic with her family. She has 42 cookies. She also has 63 pieces of candy and 21 brownies. There are
7 people in her family. If each person gets the same number of each dessert, how much will each person get?
A: Let’s think step by step. Each person gets 42 cookies / 7 people = <<42/7=6>>6 cookies. Each person gets 63 pieces of candy /
7 people = <<63/7=9>>9 pieces of candy. Each person gets 21 brownies / 7 people = <<21/7=3>>3 brownies. Each person will
get 6 + 9 + 3 = <<6+9+3=18>>18 desserts. Final answer: 18 The answer is 18.
Q: Nate went out to do his daily exercises. In 8 seconds, he ran a distance equal to four times the length of a football field.
He rested for a while, then ran 500 more meters. If the field’s length is 168 meters, how far did Nate ran?
A: Let’s think step by step. In the 8 seconds, Nate ran 4*168 = 672 meters. The total distance Nate ran is 672+500 =
_<<672+500=1172>>1172 meters. Final answer: 1172 The answer is 1172._
Q: Monica is a teacher. She has 6 classes per day. The first class has 20 students. The second and third classes have 25 students. Her fourth class has half as many as her first class. Her fifth and sixth classes have 28 students. How many students does
Monica see each day?
A: Let’s think step by step. Monica has 20+25+25+28+28=<<20+25+25+28+28=126>>126 in 5 of her classes. In the fourth
class, she has 20/2=<<20/2=10>>10 students. In total, she has 126+10=<<126+10=136>>136 students per day. Final answer:
136 The answer is 136.
Q: {question}
A: Let’s think step by step.
Figure 22: 8-Shot Prompt for GSM8K dataset for Auto CoT.
-----
Q: Wendy uploaded 45 pictures to Facebook. She put 27 pics into one album and put the rest into 9 different albums. How many
pictures were in each album?
A: Let’s think step by step. Let’s think step by step. First, we know that Wendy uploaded 45 pictures in total. Second, we know that
Wendy put 27 pictures into one album. That means that Wendy put the remaining 18 pictures into 9 different albums. That means
that each album would have 2 pictures. The answer is 2.
Q: For Halloween Katie and her sister combined the candy they received. Katie had 8 pieces of candy while her sister had
23. If they ate 8 pieces the first night, how many pieces do they have left?
A: Let’s think step by step. Let’s think step by step. Katie and her sister have a total of 8 + 23 = 31 pieces of candy. If they eat 8
pieces the first night, they have 31 - 8 = 23 pieces left. The answer is 23.
Q: Bianca was organizing her book case making sure each of the shelves had exactly 8 books on it. If she had 5 shelves of
mystery books and 4 shelves of picture books, how many books did she have total?
A: Let’s think step by step. Let’s think step by step. There are 5 shelves of mystery books. Each shelf has 8 books. So that’s 40 mystery
books. There are 4 shelves of picture books. Each shelf has 8 books. So that’s 32 picture books. 40 + 32 = 72 books. The answer is 72.
Q: A pet store had 78 puppies. In one day they sold 30 of them and put the rest into cages with 8 in each cage. How many
cages did they use?
A: Let’s think step by step. Let’s think step by step. There are 78 puppies. 30 are sold, so that means there are 48 left. 48 divided by
8 is 6, so that means there are 6 cages with 8 puppies in each. The answer is 6.
Q: A waiter had 14 customers to wait on. If 3 customers left and he got another 39 customers, how many customers would
he have?
A: Let’s think step by step. Let’s think step by step. The waiter had 14 customers to wait on. If 3 customers left, that means he would
have 11 customers left. If he got another 39 customers, that means he would have 50 customers in total. The answer is 50.
Q: A trivia team had 7 members total, but during a game 2 members didn’t show up. If each member that did show up
scored 4 points, how many points were scored total?
A: Let’s think step by step. Let’s think step by step. There were 7 members on the team, but 2 members didn’t show up. That means
that there were 5 members that did show up. Each member that showed up scored 4 points. So if 5 members each scored 4 points,
then the total number of points scored would be 5*4=20. The answer is 20.
Q: Gwen had 18 math problems and 11 science problems for homework. If she finished 24 of the problems at school, how
many problems did she have to do for homework?
A: Let’s think step by step. Let’s think step by step. Gwen had 18 math problems and 11 science problems for homework. That
means she had a total of 29 problems for homework. If she finished 24 of the problems at school, that means she had 5 problems left
to do for homework. The answer is 5.
Q: Mike made 69 dollars mowing lawns over the summer. If he spent 24 dollars buying new mower blades, how many 5
dollar games could he buy with the money he had left?
A: Let’s think step by step. Let’s think step by step. Mike made $69 from mowing lawns. He spent $24 on new mower blades. That
means he has $45 left. Each game costs $5, so he could buy 9 games. The answer is 9.
Q: {question}
A: Let’s think step by step.
Figure 23: 8-Shot Prompt for Multi-Arith dataset for Auto CoT.
-----
Q: Baker made 61 pastries and 167 cakes. If he sold 108 cakes and 44 pastries How many cakes would baker still have?
A: Let’s think step by step. The baker has 167 in the beginning. If he sells 108, he has 167 - 108 = 59 left. The answer is 59. The
answer is 59.0.
Q: Julia played tag with some kids on monday. She played tag with 14 kids on tuesday. If she played with a total of 16
kids, How many kids did she play with on monday?
A: Let’s think step by step. She played with 16 kids in total, with 14 on Tuesday. Therefore, on Monday she played with 16 - 14 = 2
kids. The answer is 2. The answer is 2.0.
Q: Paul had 108 books. After selling some books in a garage sale and giving 35 books to his friend he had 62 books left.
How many books did he sell in the garage sale?
A: Let’s think step by step. Paul had 108 books in the beginning. He gave 35 away and has 62 left. Therefore, he sold 108 - 35 - 62
= 11 in the garagae. The answer is 11. The answer is 11.0.
Q: Tiffany was collecting cans for recycling. On monday she had 8 bags of cans. The next day she found 7 more bags
worth of cans. How many more bags did she have on monday more than she found on the next day?
A: Let’s think step by step. One the first day she found 8, on the second 7. Therefore, she found 8 - 7 = 1 bags more on Monday. The
answer is 1. The answer is 1.0.
Q: Jake has 84 more apples and 10 fewer peaches than Steven. Steven has 13 peaches and 52 apples. How many peaches
does Jake have?
A: Let’s think step by step. Steven has 13 peaches. Since Jake has 10 fewer then Steven, he hast 13 - 10 = 3 peaches. The answer is
3. The answer is 3.0.
Q: 5 storks and 3 birds were sitting on the fence. 4 more birds came to join them. How many more birds than storks are
sitting on the fence?
A: Let’s think step by step. On the fence are siting 3 + 4 = 7 birds. Therefore, there are 7 - 5 = 2 birds more then strokes. The answer
is 2. The answer is 2.0.
Q: Bobby had 21 pieces of candy. He ate 5 pieces of candy. Then he ate 9 more. How many pieces of candy does he still
have left?
A: Let’s think step by step. If Bobby ate 5 + 9 = 14 pieces of candy, he has 21 - 14 = 7 pieces left. The answer is 7. The answer is 7.0.
Q: The grasshopper, the frog and the mouse had a jumping contest. The grasshopper jumped 21 inches. The grasshopper
jumped 25 inches farther than the frog and the mouse jumped 16 inches lesser than the frog. How much farther did the grasshopper
jump than the mouse?
A: Let’s think step by step. The grasshopper jumped 25 inches farther then the frog. The frog jumped 16 inches farther then the
mouse. Therefore, the grasshopper jumped 25 + 16 = 41 inches farther then the mouse. The answer is 41. The answer is 41.0.
Q: {question}
A: Let’s think step by step.
Figure 24: 8-Shot Prompt for SVAMP dataset for Auto CoT.
-----
Q: The train 160 m long is running with a speed of 80 km/hr. In what time will it pass a man running in same direction with the
speed of 8 km/hr.
A)16 seconds
B)8 seconds
C)15 seconds
D)6 seconds
E)28 seconds
A: Let’s think step by step. Man and train are running in same direction, Relative Speed = 80 - 8 = 72 kmph. = (72 * 5)/18 = 20
m/sec Length of the train = 160 m. Time taken to pass the man = 160/20 = 8 seconds. Answer: Option B The answer is B.
Q: 1, 9, 25, 49, 81, ?
A)100
B)121
C)144
D)169
E)171
A: Let’s think step by step. The series consists of squares of consecutive prime numbers. Answer : B. The answer is B.
Q: A shopkeeper expects a gain of 22.5% on his cost price. If in a week, his sale was of Rs. 490, what was his profit?
A)s. 64
B)s. 69
C)s.72
D)s.75
E)s.90
A: Let’s think step by step. C.P. = Rs. (100/122.5)x490 = Rs. (1000/1225)x490 = Rs. 400 Profit = Rs. (490 - 400) = Rs. 90. Answer:E
The answer is E.
Q: Students in Class I and II of a school are in the ratio of 3 : 5. Had 15 students leave the school from each class, the
ratio would have become 1 : 2. How many total students were there in the beginning?
A)120
B)64
C)96
D)80
E)None
A: Let’s think step by step. class 1 have 3x student and class 2 have 5x student. 3x-15/5x-15=1/2,6x-30=5x-15,x=15 45+75=120
student answer A The answer is A.
Q: {question}
A: Let’s think step by step.
Figure 25: 4-Shot Prompt for AQuA dataset for Auto CoT.
-----
Q: A triangle has side lengths of 8, 15 and 17 units. What is the area of the triangle, in square units?
A: Let’s think step by step. We see that 8[2] + 15[2] = 64 + 225 = 289 = 17[2]. So the triangle is a right triangle with legs 8 and 15,
and thus its area is
8(15)
= 60
2
. Final answer: 60 The answer is 60.
Q: A line is parameterized by
_x_ 1 2
= + t _._
y 1 −3
A second line is parameterized by
_x_ 5 4
= + u _._
y −9 2
Find the point where the lines intersect.
A: Let’s think step by step. For the first line,
_x_ 1 2 1 + 2t
= + t = _._
y 1 −3 1 − 3t
For the second line,
_x_ 5 4 5 + 4u
= + u = _._
y −9 2 −9 + 2u
Hence, 1 + 2t = 5 + 4u and 1 − 3t = −9 + 2u. Solving, we find t = 3 and u = [1]2 _[,][ so]_
_x_ 7
= _._
y −8
7 7
. Final answer: The answer is .
−8 −8
Q: If f (x) = 8x[3] _−_ 6x[2] _−_ 4x + 5, find the value of f (−2).
A: Let’s think step by step. We have
_f_ (−2) = 8(−2)[3] _−_ 6(−2)[2] _−_ 4(−2) + 5
= 8(−8) − 6(4) + 8 + 5 = −64 − 24 + 8 + 5 = −75 .
. Final answer: -75 The answer is -75.
Q: What two-digit positive integer is one more than a multiple of 2, 3, 4, 5 and 6?
A: Let’s think step by step. The least common multiple of these integers is 4 · 3 · 5 = 60. Thus, a positive integer that is one more
than a multiple of all the given integers is 60 + 1 = 61 . Final answer: 61 The answer is 61.
Q: Find one third of 5.4, expressed as a simplified fraction.
A: Let’s think step by step. First, we take 5.4 ÷ 3 = 1.8. Then, we convert 1.8 to a fraction. Since 1.8 = [18]10 [, we can simplify this]
fraction to yield 5[9] [. Final answer:][ 9]5 [The answer is][ 9]5 [.]
Figure 26: 4-Shot Prompt for MATH dataset for Auto CoT - Part 1.
-----
Q: In the diagram below, BC is 8 cm. In square centimeters, what is the area of triangle ABC?
[asy]
defaultpen(linewidth(0.7));
draw((0,0)--(16,0)--(23,20)--cycle);
draw((16,0)--(23,0)--(23,20),dashed);
label("8 cm",(8,0),S);
label("10 cm",(23,10),E);
label("$A$",(23,20),N);
label("$B$",(0,0),SW);
label("$C$",(16,0),SE);
[/asy]
A: Let’s think step by step. The area of a triangle is [1]2 [(][base][)(][height][)][, so the area of triangle][ ABC][ is][ 1]2 [(8][ cm][)(][10 cm][) = 40][ square]
centimeters. Final answer: 40 The answer is 40.
Q: A teacher finds that when she offers candy to her class of 30 students, the mean number of pieces taken by each student
is 5. If every student takes some candy, what is the greatest number of pieces one student could have taken?
A: Let’s think step by step. Since there are 30 students and a mean of 5 candies per student, there are a total of 5 · 30 = 150 pieces
of candy. Since every student must take at least one piece of candy, the first 29 students must take a total of at least 29 pieces. Since
150 − 29 = 121, the greatest number of pieces one student can take is 121 . Final answer: 121 The answer is 121.
Q: Find the only real number that can be expressed in the form
(a + bi)[3] _−_ 107i,
where i[2] = −1, and a and b are positive integers.
A: Let’s think step by step. We have
(a + bi)[3] _−_ 107i = (a[3] + 3a[2]bi − 3ab[2] _−_ _b[3]i) −_ 107i
= (a[3] _−_ 3ab[2]) + (3a[2]b − _b[3]_ _−_ 107)i.
If this is a real number, then we must have
0 = 3a[2]b − _b[3]_ _−_ 107
or
107 = b(3a[2] _−_ _b[2])._
Since 107 is prime, either b = 1 or b = 107. If b = 1, then we have 107 = 3a[2] _−_ 1, so a[2] = 36 and a = 6. If b = 107, then we
have 1 = 3a[2] 107[2], so a[2] = [1+107]3 [2] _. But 107[2]_ 2[2] 1 (mod 3), so the right-hand side is not an integer. Thus, (a, b) = (6, 1)
_−_ _≡_ _≡_
is the only possibility. Then the answer is
_a[3]_ _−_ 3ab[2] = 6[3] _−_ 3 · 6 · 1[2] = 198 .
. Final answer: 198 The answer is 198.
Q: {question}
A: Let’s think step by step.
Figure 27: 4-Shot Prompt for MATH dataset for Auto CoT - Part 2.
-----
Q: Tina makes $18.00 an hour. If she works more than 8 hours per shift, she is eligible for overtime, which is paid by your hourly
wage + 1/2 your hourly wage. If she works 10 hours every day for 5 days, how much money does she make?
A: Let’s think step by step. She works 8 hours a day for $18 per hour so she makes 8*18 = <<8*18=144.00>>144.00 per 8-hour
shift She works 10 hours a day and anything over 8 hours is eligible for overtime, so she gets 10-8 = <<10-8=2>>2 hours of
overtime Overtime is calculated as time and a half so and she makes $18/hour so her overtime pay is 18*.5 = <<18*.5=9.00>>9.00
Her overtime pay is 18+9 = <<18+9=27.00>>27.00 Her base pay is $144.00 per 8-hour shift and she works 5 days and makes 5
- $144 = <<144*5=720.00>>720.00 Her overtime pay is $27.00 per hour and she works 2 hours of overtime per day and makes
27*2 = <<27*2=54.00>>54.00 in overtime pay 2 hours of overtime pay for 5 days means she makes 54*5 = $270.00 In 5 days
her base pay is $720.00 and she makes $270.00 in overtime pay so she makes $720 + $270 = <<720+270=990.00>>990.00 Final
answer: 990
Q: Mrs. Snyder used to spend 40% of her monthly income on rent and utilities. Her salary was recently increased by $600
so now her rent and utilities only amount to 25% of her monthly income. How much was her previous monthly income?
A: Let’s think step by step. Let her previous monthly income be p The cost of her rent and utilities was 40% of p which is (40/100)*p
= 2p/5 Her income was increased by $600 so it is now p+$600 The cost of her rent and utilities now amount to 25% of (p+$600)
which is (25/100)*(p+$600) = (p+$600)/4 Equating both expressions for cost of rent and utilities: 2p/5 = (p+$600)/4 Multiplying
both sides of the equation by 20 gives 8p = 5p+$3000 Subtracting 5p from both sides gives: 3p = $3000 Dividing both sides by 3
gives p = $1000 Final answer: 1000
Q: Jesse and Mia are competing in a week long race. They have one week to run 30 miles. On the first three days Jesse
averages (2/3) of a mile. On day four she runs 10 miles. Mia averages 3 miles a day over the first 4 days. What is the average of their
average that they have to run over the final three days?
A: Let’s think step by step. Jesse runs 2 miles in the first three days because 3 x (2/3) = <<3*(2/3)=2>>2 Jesse has 18 miles left
to run because 30 - 10 - 2 = <<30-10-2=18>>18 Jesse has to run an average of 6 miles a day because 18 / 3 = <<18/3=6>>6
Mia runs 12 miles over the first four days because 4 x 3 = <<4*3=12>>12 She has 18 miles left to run because 30 - 12 = <<3012=18>>18 She has to run six miles a day because 18 / 3 = <<18/3=6>>6 The total they both have to run is <<12=12>>12
miles a day The average they have to run per day on average is 6 miles because 12 / 2 = <<12/2=6>>6 Final answer: 6
Q: Janice can type 6 sentences per minute. Today at work, Janice continued working on a paper she started typing yesterday. She typed for 20 minutes, took a break, and typed 15 minutes longer. She then had to erase 40 sentences she had typed
incorrectly. After a meeting, she typed for 18 minutes more. In all, the paper had 536 sentences by the end of today. How many
sentences did she start with today?
A: Let’s think step by step. Janice had X sentences from yesterday already typed to start with today. Janice typed 6 * 20 =
_<<6*20=120>>120 sentences before her break. She typed 6 * 15 = <<6*15=90>>90 sentences after her break. She typed 6 * 18_
= <<6*18=108>>108 sentences after her meeting. Therefore, she typed 120 + 90 + 108 = <<120+90+108=318>>318 sentences
today in all. She had to erase 40 sentences, so she had 318 - 40 = <<318-40=278>>278 sentences that she typed today left. The
paper had X + 278 = 536 sentences at the end of today. Thus, she had X = 536 - 278 = <<536-278=258>>258 sentences typed on
the paper to start with today. Final answer: 258
Q: Lisa and Carly go shopping together. Lisa spends $40 on t-shirts then spends half of this amount on jeans and twice this
amount on coats. Carly spends only a quarter as much as Lisa on t-shirts but spends 3 times as much on jeans and a quarter of the
amount Lisa spent on coats. In dollars, how much did Lisa and Carly spend in total?
A: Let’s think step by step. Lisa spends $40 on t-shirts / 2 = <<40/2=20>>20 on jeans. She also spends $40 on t-shirts * 2
= <<40*2=80>>80 on coats. So Lisa has spent a total of 40 + 20 + 80 = <<40+20+80=140>>140. Carly spends $40 / 4 =
_<<40/4=10>>10 on t-shirts. She also spends $20 per pair of jeans * 3 = <<20*3=60>>60 on jeans. She then also spends $80_
Lisa’s cost for coats / 4 = <<80/4=20>>20 on coats. So Carly has spent a total of 10 + 60 + 20 = <<10+60+20=90>>90. Lisa
and Carly have therefore spent a total of 140 + 90 = <<140+90=230>>230. Final answer: 230
Q: Jim collects model cars, and he has 301 models total. Jim has 4 times as many Buicks as Fords, and 3 more than twice
the number of Fords than Chevys. How many Buicks does Jim have?
A: Let’s think step by step. Let x represent the number of Chevys Fords:3+2x Buicks:4(3+2x)=12+8x Total:x+3+2x+12+8x=301
11x+15=301 11x=286 x=<<26=26>>26 Buicks:12+8(26)=220 Final answer: 220
Figure 28: 8-Shot Prompt for GSM8K dataset for Complex CoT - Part 1.
-----
Q: Janet hires six employees. Four of them are warehouse workers who make $15/hour, and the other two are managers who make
$20/hour. Janet has to pay 10% of her workers’ salaries in FICA taxes. If everyone works 25 days a month and 8 hours a day, how
much does Janet owe total for their wages and taxes for one month?
A: Let’s think step by step. First figure out how many hours each worker works per month by multiplying the number of days
they work by the number of hours a day they work: 25 days * 8 hours/day = <<25*8=200>>200 hours Then calculate how
much one warehouse worker makes per month by multiplying their hourly rate by the number of hours they work: 200 hours *
$15/hour = <<200*15=3000>>3000 Then multiply that number by 4 to find out how much all the warehouse workers make:
$3000/worker * 4 workers = <<3000*4=12000>>12,000 Now multiply the hours each manager works (also 200) by their
hourly wage to find out how much one manager makes per month: 200 hours * $20/hour = <<200*20=4000>>4,000 Now
multiply one manager’s wages by the number of managers (2) to find their total wage amount: $4,000/manager * 2 managers =
_<<4000*2=8000>>8,000 Now add the wages for the managers and the workers to find the total cost of the wages: $8,000 +_
$12,000 = <<8000+12000=20000>>20,000 Now multiply the total wage bill by 10% to find how much the FICA taxes are:
$20,000 * .1 = <<20000*.1=2000>>2,000 Now add the total wage bill to the total tax amount to find the grand total: $2,000 +
$20,000 = <<2000+20000=22000>>22,000 Final answer: 22000
Q: A hay farmer harvested 560 bales of hay from 5 acres of grass per month last year. This year, he planted an additional 7
acres of grass. If the farmer also owns 9 horses and each horse consumes 3 bales of hay a day, how many bales of hay would the
farmer have left by the end of December if he starts feeding them this year’s hay beginning the first day of September?
A: Let’s think step by step. For every acre of grass, the farmer can harvest 560/5 = <<560/5=112>>112 bales of hay each month.
This year, the farmer has 7 + 5 = <<7+5=12>>12 acres of grass. He can expect to harvest 12 x 112 = <<12*112=1344>>1344
bales of hay per month. The total hay production this year is 1344*12 = <<1344*12=16128>>16128 From September to
December, the farmer would have to feed his horses for a total of 30 + 31 + 30 + 31 = <<30+31+30+31=122>>122 days
Each day his horse eats a total of 3*9 = <<3*9=27>>27 bales of hay. For the 122 days, the horses will eat a total of 27*122 =
_<<27*122=3294>>3294 bales. The total number of bales remaining will be 16128-3294 = <<16128-3294=12834>>12834._
Final answer: 12834
Q: {question}
A: Let’s think step by step.
Figure 29: 8-Shot Prompt for GSM8K dataset for Complex CoT - Part 2.
-----
Q: Roger was helping the cafeteria workers pick up lunch trays, but he could only carry 4 trays at a time. If he had to pick up 10
trays from one table and 2 trays from another, how many trips will he make?
A: Let’s think step by step. Roger would need to pick up 10 + 2 = 12 trays, therefore the has to make 12 / 4 = 3 trips. The answer is
3.
Q: Dave was helping the cafeteria workers pick up lunch trays, but he could only carry 9 trays at a time. If he had to pick
up 17 trays from one table and 55 trays from another, how many trips will he make?
A: Let’s think step by step. Dave would need to pick up 17 + 55 = 72 trays, therefore the has to make 72 / 9 = 8 trips. The answer is
8.
Q: Victor was helping the cafeteria workers pick up lunch trays, but he could only carry 7 trays at a time. If he had to pick
up 23 trays from one table and 5 trays from another, how many trips will he make?
A: Let’s think step by step. Victor would need to pick up 23 + 5 = 28 trays, therefore the has to make 28 / 7 = 4 trips. The answer is
4.
Q: A store had 40 coloring books in stock. They ended up putting them on sale and getting rid of 20 of them. The put the
ones they still had onto shelves with 4 on each shelf. How many shelves did they use?
A: Let’s think step by step. After the sale the shop has 40 - 20 = 20 books left. Therefore, they use 20 / 4 = 5 shelves. The answer is
5.
Q: A store had 48 coloring books in stock. They ended up putting them on sale and getting rid of 38 of them. The put the
ones they still had onto shelves with 5 on each shelf. How many shelves did they use?
A: Let’s think step by step. After the sale the shop has 48 - 38 = 10 books left. Therefore, they use 10 / 5 = 2 shelves. The answer is
2.
Q: A store had 27 coloring books in stock. They ended up putting them on sale and getting rid of 6 of them. The put the
ones they still had onto shelves with 7 on each shelf. How many shelves did they use?
A: Let’s think step by step. After the sale the shop has 27 - 6 = 21 books left. Therefore, they use 21 / 7 = 3 shelves. The answer is 3.
Q: Emily was playing a trivia game. In the first round she scored 16 points and in the second round she scored 33 points.
In the last round she lost 48 points. How many points did she have at the end of the game?
A: Let’s think step by step. Emily scored 16 + 33 = 49 points. Therefore, in the end she has 49 - 48 = 1 point left. The answer is 1.
Q: Chloe was playing a trivia game. In the first round she scored 40 points and in the second round she scored 50 points. In
the last round she lost 4 points. How many points did she have at the end of the game?
A: Let’s think step by step. Chloe scored 40 + 50 = 90 points. Therefore, in the end she has 90 - 4 = 86 points left. The answer is 86.
Q: {question}
A: Let’s think step by step.
Figure 30: 8-Shot Prompt for Multi-Arith dataset for Complex CoT.
-----
Q: Jerry had 7 books and 3 action figures on a shelf in his room. Later he added 2 more action figures to the shelf. How many more
books than action figures were on his shelf?
A: Let’s think step by step. Jerry had 7 books and 3 + 2 = 5 figures on the shelf. Therefore, he has 7 - 5 = 2 more book and action
figures. The answer is 2.
Q: Faye had 35 packs of pencils each one having 4 pencils. She was placing her pencils into rows with 2 pencils in each
row. How many rows could she make?
A: Let’s think step by step. Faye can make 4 / 2 = 2 rows with one pack. Therefore, she can make in total 35 * 2 = 70 rows. The
answer is 70.
Q: After a typhoon, 13 trees in Haley’s backyard died. If she had grown 15 trees initially How many more trees died in the
typhoon than those that survived?
A: Let’s think step by step. The typhoon 15 - 13 = 2 trees survived. Since 13 died, 13 - 2 = 11 trees died more then survives. The
answer is 2.
Q: The Razorback t-shirt shop makes $ 78 dollars off each t-shirt sold. During the Arkansas game and the Texas tech game
they sold a total of 186 t-shirts. If they sold 172 t-shirts during the Arkansas game How much money did they make from selling the
t-shirts during the Texas tech game?
A: Let’s think step by step. They sold 186 - 172 = 14 in the Texas game, they made 78 * 14 = 1092 dollars. The answer is 1092.
Q: Jack received 3 emails in the afternoon, 6 emails in the morning and some more in the evening. If he received a total of
10 emails in the day How many emails did jack receive in the evening?
A: Let’s think step by step. He received 10 - ( 6 + 3 ) = 1 email in the afternoon. The answer is 1.
Q: Jack received 6 emails in the morning, 3 emails in the afternoon and some more in the evening. If he received a total of
10 emails in the day How many emails did Jack receive in the afternoon?
A: Let’s think step by step. He received 10 - ( 3 + 6 ) = 1 email in the afternoon. The answer is 1.
Q: Faye was placing her pencils into rows with 16 pencils in each row. She had 28 packs of pencils each one having 24
pencils. How many rows could she make?
A: Let’s think step by step. Faye can make 24 / 16 = 3 / 2 rows with one pack. Therefore, she can make in total 28 * 3 / 2 = 42 rows.
The answer is 42.
Q: There were 13 roses in the vase. Jessica cut some more roses from her flower garden which had a total of 12 roses.
There are now 21 roses in the vase. How many roses are left in the garden?
A: Let’s think step by step. To get 21 roses, Jessica cut 21 - 13 = 8 roses from the garden. Therefore, there are 12 - 8 = 4 roses left.
The answer is 4.
Q: {question}
A: Let’s think step by step.
Figure 31: 8-Shot Prompt for SVAMP dataset for Complex CoT.
-----
Q: The speed at which a man can row a boat in still water is 25 kmph. If he rows downstream, where the speed of current is 11
kmph, what time will he take to cover 80 metres?
A)18 seconds
B)27 seconds
C)26 seconds
D)12 seconds
E)8 seconds
A: Let’s think step by step. Speed of the boat downstream = 25 +11 = 36 kmph = 36 * 5/18 = 10 m/s Hence time taken to cover 80
m = 80/10 = 8 seconds. Answer:E
Q: The entrance fee for a fair is $5 for persons under the age of 18, and 20% more for persons older. Each ride at the fair
costs $0.50. If Joe goes with her 6 years old twin brothers, and they each took 3 rides in total. How much money does Joe end up
spending at the fair?
A)16
B)20.5
C)17.5
D)20
E)4.5
A: Let’s think step by step. Total entrance fee is (2*$5) + (1.20*5)= $16 Total rides fee is (0.50*3)*3= $4.50 Total money spent is
$20.50 Answer is B
Q: A survey reveals that the average income of a company’s customers is $45,000 per year. If 50 customers respond to the
survey and the average income of the wealthiest 10 of those customers is $95,000, what is the average income of the other 40
customers? Is there a way to solve this using weighted average concept instead of doing tedious calculations?
A) $32,500
B) $35,000
C) $37,500
D) $42,500
E) $50,000
A: Let’s think step by step. let x be the average of 40 customers 40*x + 10* 95000 = 50*45000 solving this we have x= 32500
Answer is A.
Q: A $500 investment and a $1,500 investment have a combined yearly return of 19 percent of the total of the two investments. If the $500 investment has a yearly return of 7 percent, what percent yearly return does the $1,500 investment have?
A)9%
B)10%
C)23%
D)21%
E)22%
A: Let’s think step by step. The equation we can form the question : Return on Total Investment = Sum of individual Investments
(500+1500)(19)=(500ˆaˆ—7)+(1500x), where x is the return on investment of 1500. Solving the equation, we get x = 23% ( Option
C ) ANSWER:C
Q: {question}
A: Let’s think step by step.
Figure 32: 8-Shot Prompt for AQuA dataset for Complex CoT.
-----
Q: What is the degree of the polynomial (4 + 5x[3] + 100 + 2πx[4] + _√10x[4]_ + 9)?
A: Let’s think step by step. This polynomial is not written in standard form. However, we don’t need to write it in standard form, nor
do we need to pay attention to the coefficients. We just look for the exponents on x. We have an x[4] term and no other term of higher
degree, so 4 is the degree of the polynomial. Final answer: 4
Q: At what point does the line containing the points (1, 7) and (3, 11) intersect the y-axis? Express your answer as an ordered pair.
A: Let’s think step by step. The y-axis is where the x-coordinate is 0. Using the given points, as the x-coordinate decreases by 2, the
_y-coordinate decreases by 4. So as the x-coordinate decreases by 1 from 1 to 0, the y-coordinate will decrease by 2 from 7 to 5. The_
point is (0, 5) . Final answer: (0,5)
Q: Let C be the circle with equation x[2] _−_ 6y − 3 = −y[2] _−_ 4x. If (a, b) is the center of C and r is its radius, what is the
value of a + b + r?
A: Let’s think step by step. We can rewrite the equation x[2] _−_ 6y − 3 = −y[2] _−_ 4x as x[2] + 4x + y[2] _−_ 6y = 3. Completing the
square, we have (x + 2)[2] _−_ 4 + (y − 3)[2] _−_ 9 = 3, or (x + 2)[2] + (y − 3)[2] = 16. This is the equation of a circle of radius r = 4
and with center (a, b) = (−2, 3). Therefore, a + b + r = −2 + 3 + 4 = 5 . Final answer: 5
Q: What is the value of x in the equation 16[16] + 16[16] + 16[16] + 16[16] = 2[x]?
A: Let’s think step by step. We rewrite the left side 16[16] + 16[16] + 16[16] + 16[16] as 4 · 16[16] = 2[2] _· (2[4])[16]_ = 2[2] _· 2[64]_ = 2[66]. We have
2[66] = 2[x], so the value of x is 66 . Final answer: 66
Q: A geometric sequence of positive integers is formed for which the first term is 3 and the fourth term is 192. What is the
third term of the sequence?
A: Let’s think step by step. Let the geometric sequence have common ratio r. We know that 3 · r[3] = 192, or r = 4. Thus, the third
term is 3 · r[2] = 3 · 4[2] = 48 . Final answer: 48
Q: A geometric sequence of positive integers is formed for which the first term is 2 and the fifth term is 162. What is the
sixth term of the sequence?
A: Let’s think step by step. Let the geometric sequence have common ratio r. We know that 2 · r[4] = 162, or r = 3. Thus, the sixth
term is 2 · r[5] = 2 · 3[5] = 486 . Final answer: 486
Q: {question}
A: Let’s think step by step.
Figure 33: 8-Shot Prompt for MATH dataset for Complex CoT.
-----
| [
"Kathrin, Seßler",
"Yao, Rong",
"Emek, Gözlüklü",
"Enkelejda, Kasneci"
] | 2024-08-20T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2408.10839v1 | https://arxiv.org/abs/2408.10839 | https://www.semanticscholar.org/paper/f57225975fb3fcbc12fe056692405ba09321fc3e |
Benchmarking the Reasoning Robustness against Noisy Rationales in Chain-of-thought Prompting | This paper investigates an under-explored challenge in large language models (LLMs): chain-of-thought prompting with noisy rationales—irrelevant or inaccurate reasoning steps—despite advancements in in-context learning. We construct the NoRa dataset, specifically designed to evaluate LLMs’ robustness to noisy rationales, based on which we reveal a widespread vulnerability among LLMs to such noise, with limited efficacy from existing reasoning methods. To combat this, we propose the contrastive denoising with noisy chain-of-thought (CD-CoT) method to enhance denoising-reasoning capabilities by contrasting noisy rationales with only one clean rationale, thereby advancing the robustness of LLMs in reasoning. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/95956 | null | null |
Beyond the tactic-state automaton | N/A | null | # BEYOND THE TACTIC-STATE AUTOMATON
**Daniel Selsam**
Microsoft Research
Redmond, WA, USA
[email protected]
ABSTRACT
Most prior work applying machine learning to higher-order theorem proving has
adopted the tactic-state automata idiom in which the ML agent maps tactic-states
to tactic-state transformations (i.e. tactics). This approach is appealing in its simplicity but suffers many limitations. We introduce a new way to define sophisticated search spaces and discuss several ways of connecting them to ML oracles.
1 INTRODUCTION
Although the details differ among systems, generally speaking a tactic-state is a list of goals, where
each goal is a sequent, i.e. a list of hypotheses and a goal to be proven. The goals within a tactic-state
are often independent but may be coupled by shared metavariables, i.e. fixed, not-yet-determined
values that appear in multiple goals. A tactic is an arbitrary (functional) program that can read and
write to a tactic-state, and that may also fail. Like regular programs, tactics can call each other, take
each other as functions, develop their own internal datastructures, return values of arbitrary types,
and in general can perform a lot of work that is not made visible as modifications to the tactic-state.
In the tactic-state automata idiom, an ML agent maps tactic-states to tactics. In light of the generality of tactics described above, it is worth considering why this approach may work at all. The answer
is that tactic frameworks are designed to support two very different use-cases: interactive mode, in
which users execute pre-written tactics to observe their effect on the tactic-state, and automation
mode, in which users implement (potentially sophisticated) tactics that may be used later in interactive mode or be run on goals in an off-line manner. There happens to exist many pre-written tactics
suitable for interactive mode, and the tactic-state automata idiom may be sufficient to learn how to
mix, match, and instantiate these pre-written tactics in better ways. However, this idiom does not
provide assistance in automation mode.
One may argue that the tactic-state automata idiom suffers no practical bound on its power since
most proofs in most formal mathematics libraries have relatively short representations in terms of
a relatively small number of tactic primitives. However, almost every proof in every formal mathematics library was already known informally to the formalizer. Even if the pre-written tactics are
sufficient to express proofs of the theorems in question, this does not imply that tree-search in the
action space induced by these primitives is a good way of solving challenging new problems, e.g.
problems arising in the IMO Grand Challenge. Even if this inference would hold in the infinite-data
limit, it seems particularly unjustified in practice given the severe dearth of available training data.
One natural question is: what are humans taught explicitly that tactic-state automata ML are forced
to induce? While humans see only a meager number of proofs, they are explicitly taught many highlevel problem solving strategies. Manifesting these strategies ostensibly requires writing novel,
sophisticated (automation-mode) tactics.[1] However, the strategies taught for e.g. solving olympiad
problems are in general so high-level, so ill-constrained, that a natural encoding of them will necessarily be littered with heuristic choices to make. Ideally, machine learning could provide good
heuristics within these tactics, but this requires moving beyond the tactic-state automata idiom. The
rest of this paper considers how this might be achieved.
1One could also co-train language models on descriptions of these strategies, though the plausibility of this
approach has not been established.
-----
2 THE SEARCH TRANSFORMER
We now present our main abstraction, which we call the search transformer, in its simplest form. Our
presentation will reference many concepts from functional programming (e.g. monad transformers)
though we will try to explain the relevant parts of each concept we reference.
The search transformer is built on the following three mutually inductive types:[2]
inductive SearchT (m : Type → Type) (α : Type) : Type
| mk : m (Status m a)
inductive Status (m : Type → Type) (α : Type) : Type
| done : α → Status m a
| choicepoint : ChoicePoint m a → Status m a
structure ChoicePoint (m : Type → Type) (α : Type) := {
choices : List (SearchT m a)
}
Here m : Type → Type is expected to be a monad, which for our present purposes can be interpreted as follows: m describes some set of effects such that for all types α, the type m α : Type
represents programs that return elements of type α but that may in addition perform any of the effects
allowed by m. An example is the tactic monad, TacticM, which (as discussed above) allows reading
and writing to a tactic-state object. Thus an element of type SearchT m α is a computation in m
that either returns an element of α as usual (via done) or else returns a list of SearchT m α values
(via choicepoint) representing a set of possible futures to choose among. We define choice xs
to mean SearchT.mk (pure (choicepoint (ChoicePoint.mk xs))), where pure x is the
m α computation that performs no effects and simply returns x.
When m is indeed a monad, SearchT m α is a monad as well. For our present purposes, the important implication is that we can implement SearchT programs using the convenient do-notation
pioneered by Haskell Jones (1995) and since adopted by other languages including Lean. For example, we can write a program that nondeterministically chooses two bools and returns the pair in a
natural way:
def twoBoolsDo : SearchT m (Bool × Bool) := do
let b1 ← choice [false, true]
let b2 ← choice [false, true]
pure (b1, b2)
**Extensions.** There are countless ways to extend the simple SearchT presented above. There could
also be a primitive for choosing unordered subsets of a set as in Bavishi et al. (2019) or to support
principled decomposition into subgoals (e.g. by tabling as in Selsam et al. (2020b)). There could
also be a primitive for generating a string, generate : String → (String → SearchT m
_α) →_ Status m α, where the string is to be generated by a language model and then parsed in
the downstream computation.
**Search.** A defining feature of the search transformer is that the search space induced by a SearchT
program is abstracted away from the strategies that one may use to search the space. Thus search
spaces and search strategies can be implemented separately. For example, here is pseudocode[3] for a
generic depth-first search:
def dfs (ψ : SearchT m α) : m (Option α) := do
let mut todo := #[ψ]
while todo do
let status ← todo.back
todo := todo.pop
2We use the syntax of the Lean Theorem Prover de Moura et al. (2015) throughout, though due to idiosyncracies in Lean’s metatheory, some additional indirection is required to define SearchT which we omit in this
presentation.
3This pseudocode is simplified only slightly from working code.
-----
match status with
| done x => return (some x)
| choicepoint cp => todo := todo ++ cp.choices
return none
We first initialize a (mutable) todo stack with ψ, and as long as the stack is nonempty, we pop an
element from it, run it, and either return the result (if it returns done) or add the new choices to the
stack (if it returns choicepoint). Note that this snippet assumes that any branch-specific state is
made explicit, e.g. by a StateT transformation above SearchT. We can relax this requirement by
allowing m to provide save and restore methods for the part of its state that is branch-specific and
having dfs call them at the appropriate times.
Other non-heuristic strategies are equally straightforward to implement, e.g. random search, breadthfirst search, and iterative deepening. To implement heuristic search (e.g. MCTS) we need a function
that maps ChoicePoints to either policy scores, value estimates, or both:
structure Guess { policy : Vector Float, value : Float }
structure Oracle m a := { ChoicePoint m a → Guess }
The next section discusses several ways that machine learning could provide such an oracle.
3 MACHINE LEARNING
The search transformer as presented in Section 2 provides almost no information for a heuristic
to go on. Specifically, when a search procedure stumbles on a new choicepoint cp, it has no
way to even distinguish the choices, except based on trivialities like their positions in the list. This
is because a ChoicePoint is simply a list of SearchT m α objects, and such objects have no
inspectable structure. We now discuss several possible sources of signal for a learned heuristic.
**Explicit prompts.** The ChoicePoint type can be extended to take a value of some type σ; this
argument could represent a user-specified prompt describing the choicepoint and so allow learning
a value function σ → R. Note that we used the word prompt rather than observation because it
does not in general suffice to describe only the current state of the user’s datastructures; there may
be relevant information that only exists implicitly on the stack or in the code defining the choices
available at the current choicepoint. It may not be clear how a programmer should create such a
prompt in general.
**Explicit choice summaries.** The ChoicePoint type can be extended further to take, for each
candidate choice, a corresponding element of type γ that represents some summary of the choice
and so allow learning a policy function σ → _γ[k]_ _→_ R[k]. In the tactic-machine-idiom, the summary of a choice is effectively the string representing the corresponding tactic. Although simple,
choice summaries pose similar issues as explicit prompts do: a choice may represent an arbitrarily
sophisticated computation, and it may not be clear how a programmer should “summarize” it.
**Pseudo environments.** Additional data could be tracked by the nondeterministic programs in the
form of a (pseudo) environment mapping identifiers to stacks of arbitrary (embeddable) datatypes,
for the purpose of enriching the explicit prompts. The necessary bookkeeping could be hidden as
much as possible by syntactic sugar. The let construct could be interpreted as sugar for first pushing
the value to the pseudo environment, and then popping it when the variable goes out of scope. Every
function call that may make a nondeterministic choice could first create a new local environment
and then restore the old one upon exiting.
**Self-contained ML-powered subroutines.** Another approach that is complementary to ranking
choices during search is to provide self-contained ML-powered subroutines for various search strategies to call. For example, ML may be used within an information retrieval system that maps goals to
plausibly relevant lemmas without any specific heed as to how the lemmas will actually be used by
a given caller. Many problem solving strategies are parameterized by previously established facts,
and so such an API may provide useful support for a wide range of strategies. A self-contained MLpowered module that conjectures upper or lower bounds for various terms may be widely applicable
-----
as well. ML can also be applied within stand-alone provers for simpler logics, e.g. within superposition solvers Loos et al. (2017) or SAT solvers Selsam & Bjørner (2019). Of course, an ML-based
prover trained in the tactic-state automata idiom could constitute a useful subroutine as well. More
specialized ML-powered subroutines could also provide value in certain circumstances, e.g. one that
suggests promising auxiliary points for geometry problems. This approach is appealingly modular and in most circumstances would be best practice; however, modularity can be a double-edged
sword in machine learning, especially when data is scarce, since the more one fragments the data,
the less each model has to train on.
**Direct inspection via metaprogramming.** The last approach we consider is to use metaprogramming to directly inspect the SearchT m α candidates in order to automatically encode
each one in a form suitable for a machine learning system. This is essentially the approach proposed
in Selsam et al. (2020a), though whereas they built an entirely new type of programming language to
support it, here we consider lightweight approaches to harness similar power within general purpose
languages. The feasibility depends heavily on the details of the language being extended.
In Python, a barebones SearchT program can be approximated as a thunk that either returns a special ChoicePoint object or a regular value. In this encoding, the inspect module for inspecting
live objects together with the dis module for disassembling Python bytecode make it relatively
straightforward to construct a lossless encoding of a given choice. Specifically, the bytecode of the
choice (and whatever functions it calls in turn) can be traversed at runtime, and all symbols can
be easily resolved as well. The situation is more complicated in Lean (version 4) since Lean is a
high-performance language whose runtime does not include type information, function names, nor
an environment. Nonetheless we can simulate the Python approach as follows:
1. Create a new inductive type Object to represent runtime objects.
2. Add a new primitive inspect : NonScalar → IO Object that structurally traverses
any non-scalar and produces a corresponding Object. Using the procedure dladdr, it
can resolve function (void *) addresses to (mangled) names.
3. To inspect a non-scalar term x, call inspect (unsafeCast x : NonScalar).
4. To resolve function names appearing in the resulting Object, create a new Lean environment that imports the necessary modules; then after unmangling the function names, one
can lookup the Lean IR code corresponding to each function referenced in the Object (and
in other functions recursively thereafter).
This approach has a significant disadvantage in Lean compared to explicit prompts for inspecting the
current state itself: a custom, type-aware embedding of the state datastructure itself may be much
more compact than the runtime object that represents it. For example, it is common to show machine
learning models only pretty-printed expressions, which discard troves of irrelevant information from
the original expressions. Similarly, a Lean tactic-state does not just include the list of open goals
but also includes a metavariable context containing information about previously solved goals that
is not relevant for solving the open ones. Neither of these concerns would be significant if there
were runtime type information since the embeddings could be user-defined and type-dependent.
Unfortunately, we do not see how to make the generic metaprogramming approach practical without
runtime type information.
4 DISCUSSION
Ultimately, we see no silver bullet for guiding arbitrary nondeterministic tactics in practice. On the
other hand, we also do not see how a tactic-state automaton could employ known techniques such as
building geometry diagrams and inspecting them to make conjectures. We still consider it an open
problem how to achieve the best of both worlds, expert strategies and ML.
ACKNOWLEDGMENTS
We thank Jesse Michael Han, Ryan Krueger, Leonardo de Moura, Sebastian Ullrich and Patrice
Godefroid for helpful discussions and feedback.
-----
REFERENCES
Rohan Bavishi, Caroline Lemieux, Roy Fox, Koushik Sen, and Ion Stoica. Autopandas: neuralbacked generators for program synthesis. Proceedings of the ACM on Programming Languages,
3(OOPSLA):1–27, 2019.
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
lean theorem prover (system description). In International Conference on Automated Deduction,
pp. 378–388. Springer, 2015.
Mark P Jones. A system of constructor classes: overloading and implicit higher-order polymorphism. Journal of functional programming, 5(1):1–35, 1995.
Sarah Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided proof
search. arXiv preprint arXiv:1701.06972, 2017.
Daniel Selsam and Nikolaj Bjørner. Guiding high-performance sat solvers with unsat-core predictions. In International Conference on Theory and Applications of Satisfiability Testing, pp.
336–353. Springer, 2019.
Daniel Selsam, Jesse Michael Han, Leonardo de Moura, and Patrice Godefroid. Universal policies
for software-defined mdps. arXiv preprint arXiv:2012.11401, 2020a.
Daniel Selsam, Sebastian Ullrich, and Leonardo de Moura. Tabled typeclass resolution. arXiv
_preprint arXiv:2001.04301, 2020b._
-----
| [
"Daniel, Selsam"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Bi-Chainer: Automated Large Language Models Reasoning with Bidirectional Chaining | Large Language Models (LLMs) have shown human-like reasoning abilities but still face challenges in solving complex logical problems. Existing unidirectional chaining methods, such as forward chaining and backward chaining, suffer from issues like low prediction accuracy and efficiency. To address these, we propose a bidirectional chaining method, Bi-Chainer, which dynamically switches to depth-first reasoning in the opposite reasoning direction when it encounters multiple branching options within the current direction. Thus, the intermediate reasoning results can be utilized as guidance to facilitate the reasoning process. We show that Bi-Chainer achieves sizable accuracy boots over unidirectional chaining frameworks on four challenging logical reasoning datasets. Moreover, Bi-Chainer enhances the accuracy of intermediate proof steps and reduces the average number of inference calls, resulting in more efficient and accurate reasoning. | A bidirectional chaining method, Bi-Chainer, which dynamically switches to depth-first reasoning in the opposite reasoning direction when it encounters multiple branching options within the current direction, resulting in more efficient and accurate reasoning. | # Bi-Chainer: Automated Large Language Models Reasoning with Bidirectional Chaining
**Shuqi Liu[1]** **Bowei He[1]** **Linqi Song[1][,][2][∗]**
1Department of Computer Science, City University of Hong Kong
2
Shenzhen Research Institute, City University of Hong Kong
{shuqiliu4-c, boweihe2-c}@my.cityu.edu.hk
[email protected]
**Abstract**
Large Language Models (LLMs) have shown
human-like reasoning abilities but still face
challenges in solving complex logical problems. Existing unidirectional chaining methods, such as forward chaining and backward
chaining, suffer from issues like low prediction
accuracy and efficiency. To address these, we
propose a bidirectional chaining method, BiChainer, which dynamically switches to depthfirst reasoning in the opposite reasoning direction when it encounters multiple branching options within the current direction. Thus, the intermediate reasoning results can be utilized as
guidance to facilitate the reasoning process. We
show that Bi-Chainer achieves sizable accuracy
boots over unidirectional chaining frameworks
on four challenging logical reasoning datasets.
Moreover, Bi-Chainer enhances the accuracy
of intermediate proof steps and reduces the average number of inference calls, resulting in
more efficient and accurate reasoning.
**1** **Introduction**
Automated reasoning involves deriving accurate
and valid conclusions from explicitly given knowledge (McCarthy and SCIENCE, 1963). Logical
reasoning, particularly in the context of unstructured natural language text, is essential for automated knowledge discovery and has promising implications for advancements in diverse scientific
fields. Recently, large language models (LLMs)
(Touvron et al., 2023; Ouyang et al., 2022; OpenAI, 2023) have shown promising progress in emulating human-like reasoning abilities (Wei et al.,
2022). However, they still face challenges when
it comes to complex multi-step logical reasoning
problems (Creswell et al., 2022; Kazemi et al.,
2023; Valmeekam et al., 2022).
Recent studies have enhanced reasoning capabilities by employing a modular approach that breaks
_∗*Corresponding author_
down complex tasks into smaller, more manageable
components. Selection-Inference (SI) (Creswell
et al., 2022) utilizes forward chaining that employs
iterative selection and inference steps to draw conclusions. However, the absence of explicit guidance directly targeting the goal results in subpar
and imprecise selection. On the other hand, LAMBADA (Kazemi et al., 2023) utilizes backward
chaining to handle multi-step reasoning. It starts
with the goal, recursively selects rules, and iteratively proves decomposed sub-goals. However,
LAMBADA’s ranking strategy, which prioritizes
shorter rules assuming higher success rates, may
not always be accurate. This can result in suboptimal performance and hinder the overall efficiency
of the reasoning process.
This drives our exploration of the bi-directional
chaining method that enables forward chaining
with explicit guidance towards the goal and facilitates backward chaining using determinate facts
derived from forward chaining. As illustrated in
Figure 1, we present Bi-Chainer, a modular reasoning framework that incorporates bi-directional
chaining. Bi-Chainer can be understood as a bidirectional depth-first search algorithm that dynamically switches to the opposite reasoning direction when it encounters multiple branching options
within the current direction. Consequently, the intermediate reasoning outcomes obtained from the
opposite reasoning direction can be employed as
guidance to enhance the ongoing reasoning process
on the current side.
We showcase the adaptability and effectiveness
of Bi-Chainer on four logical reasoning datasets:
ProofWriter (Tafjord et al., 2021), FOLIO (Han
et al., 2022), AR-LSAT (Zhong et al., 2022), and
ParaRules (Clark et al., 2021). The datasets encompass a broad range of logical reasoning problems,
including deductive reasoning, first-order logic reasoning, and analytical reasoning. In addition to
achieving quantitative improvements over unidi
8578
-----
**Hypothesis: The tiger is green. Proved**
**Premises: 1. The mouse is green. 2. The squirrel likes the dog. 3. The tiger is blue. 4. If someone is blue, they eat the squirrel.**
5. If someone is green then they like the squirrel. 6. If someone likes the dog and they are green then they are blue.
7. If the tiger is big then they are green. 8. If someone is blue and they eat the squirrel then the tiger is green.
**Ground Truth Proof: (1) Premise 3, 4** **_infers Tiger eats the squirrel (Inf 1); (2) Premise 3, Premise 8 and Inf 1_** **_infers The tiger is green._**
**(a) Selection Inference - Forward Chaining**
Iteration 1 Iteration 2
Premises Selection Inference Premises Selection Inference Premises Hypothesis **Unknown**
Premise 3, 4 Tiger eats the squirrel Premise1, 5 Mouse likes squirrel Tiger is green.
Select wrong premise
**(b) LAMBADA - Backward Chaining**
Iteration 1 Iteration 2
Selection Start from wrong premise
Hypothesis Premise 7 Rerank Premise 7 Inference Selection Inference Premises
Tiger is green. SelectionPremise 8 Prioritize premise 7 with fewer conditions.Need to prove tiger is big. Premise 6 Do not know. **Unknown**
Premise 7: If tiger is big then they are green. Premise 8: If some one is blue and they eat the squirrel then they are green.
**(c) Bi-Chainer - Bidirectional Chaining**
Iteration 1 **Multiple deductions** **Switch to Backward Chaining** Iteration 2
Selection Inference **_True_**
Premises Premise1, 5 Mouse likes squirrel Premises Confusion Premises Inference Selection Hypothesis
Check
Selection Inference Premise 8 **Proved**
Premise 3, 4 Tiger eats the squirrel **Terminate: all conditions in Premise 8 are proved.**
Premise 3: The tiger is blue. Inf 1: Tiger eats the squirrel. Premise 8: If some one is blue and they eat the squirrel then they are green.
Figure 1: Bi-Chainer framework in bidirectional chaining (c) in comparison with the Selection-Inference framework
in forward chaining (a) and the LAMBADA framework in backward chaining (b).
rectional chaining methods, the Bi-Chainer framework also offers qualitative advantages. Firstly, it
enhances the accuracy of intermediate proof steps,
resulting in more reliable and correct reasoning outcomes at each stage. Secondly, Bi-Chainer reduces
the number of inference calls needed during the
reasoning process. By utilizing guidance from the
opposite side, Bi-Chainer eliminates unnecessary
and redundant inference steps.
**2** **Related Works**
Recent advancements in large language models, such as LLaMA (Touvron et al., 2023),
PaLM (Chowdhery et al., 2023), and GPT-4 (OpenAI, 2023), have demonstrated surprising humanlike intelligence in the area of multi-step logical
reasoning. Due to its huge application potential for
various applications, including problem-solving,
decision-making, and critical thinking (Huang and
Chang, 2023), numerous research efforts have been
dedicated to improving or eliciting the reasoning
ability of these language models. Most of them can
be classified into three categories:
**Fully Supervised Finetuning: Some previous**
methods (Rajani et al., 2019; Hendrycks et al.,
2021) have employed fine-tuning techniques on
pre-trained language models (LMs) using downstream datasets to generate rationales or step-by
step solutions, effectively performing reasoning
until obtaining final answers. However, these methods heavily rely on meticulously constructed finetuning datasets that explicitly capture the reasoning process. Unfortunately, such high-quality data
is often difficult to access or requires substantial
resources to create. Moreover, this reliance on specific datasets can restrict the extension of the LM’s
reasoning abilities to other open-ended reasoning
tasks beyond the domain of the fine-tuning dataset.
**Prompting & In-Context Learning: The Chain**
_of Thought (CoT) and its variants (Wei et al., 2022)_
are the most common approaches to release and
utilize LLM’s reasoning capabilities. CoT guides
the model to generate explicit step-by-step rationale before producing the final results. The ra_tionale engineering techniques like rational refine-_
ment/exploration/verification are complementary
to CoT for further eliciting the reasoning capabilities more effectively via refining demonstration rationale examples (Fu et al., 2023), encouraging exploring diverse reasoning ways (Wang et al., 2023),
and verifying if the generated rationales by LLMs
lead to correct final answers (Weng et al., 2023).
_Problem decomposition methods (Zhou et al., 2023;_
Press et al., 2023) can also help facilitate the CoT
reasoning when tackling complex tasks by decomposing the complex problems into relatively simpler subproblems. It should be mentioned that most
8579
-----
**Algorithm 1 Bi-Chainer**
**Input: Premises C = (F, R), Hypothesis H with**
condition P and consequent Q, Max-Depth D.
1: F(H) = FactIdentify(H, F)
2: while not reach maximum steps D do
3: **if ForwardChaining then**
4: **repeat**
5: _Rd = RuleSelection(F(H), R, Q)_
6: _Fd = LogicDeduction(F(H), Rd)_
7: Update and ( ) with _d_
_F_ _F_ _H_ _F_
8: _v = FactCheck(H, F)_
9: _c = ConfusionCheck(_ _d)_
_F_
10: **until c is True**
11: Switch to BackwardChaining
12: **end if**
13: **if BackwardChaining then**
14: **repeat**
15: _Ra = RuleSelection(Q, R)_
16: _Fa = LogicAbduction(Q, Ra)_
17: = _a_
_Q_ _F_
18: _v = FactCheck(Q, F)_
19: _c = ConfusionCheck(_ _a)_
_F_
20: **until c is True**
21: Switch to ForwardChaining
22: **end if**
23: **if v is not Unknown then**
24: **return v**
25: **end if**
26: end while
27: return Unknown
Figure 1 illustrates the application of bidirectional chaining in proving a hypothesis using a
set of premises. Initially, forward chaining is employed to derive more definite facts and update the
premises. In the forward chaining process, deductions are made based on selected premises, such as
Premises 3 and 4 leading to the deduction "Tiger
eats the squirrel", and Premises 1 and 5 establishing
the deduction "Mouse likes squirrel". However, as
multiple deductions are obtained, further forward
chaining becomes confusing on which deduction
to select to continue the chaining process. Therefore, the Confusion Check module triggers a switch
to backward chaining. In the backward chaining
phase, both Premise 7 and Premise 8 support the
consequence of the hypothesis "Someone is green".
However, Premise 8’s conditions can all be proven
using the intermediate deductions obtained from
forward chaining. As a result, the hypothesis is
successfully proved using bi-directional chaining.
previous methods are forward chaining reasoning
while only a few works (Kazemi et al., 2023) have
noticed its drawbacks and tried conducting backward chaining reasoning.
**Hybrid Methods: Some other methods propose**
to simultaneously enhance and elicit the reasoning
capabilities of LLMs with training and prompting
techniques, respectively, like reasoning-enhanced
_training and prompting (Chung et al., 2022) and_
_bootstrapping & self-improving (Zelikman et al.,_
2022; Huang et al., 2023).
Our work lies in the category of prompting &
in-context learning and aims to fully release the
multi-step logical reasoning ability embedded in
powerful LLMs like GPT-4.
**3** **Methodology**
In this section, we introduce the Bi-Chainer framework, which automates logical reasoning over natural language premises using bidirectional chaining.
The premises C consist of a set of facts F and rules
_R, where rules can be deductive, first-order logic,_
or analytical reasoning statements. The framework
aims to prove or disprove a hypothesis H based on
the given premises. The hypothesis and the premise
follow the form “If P, then Q", where P represents
the condition and Q represents the consequent.
**3.1** **Bi-directional Chaining**
Bidirectional chaining is a reasoning strategy that
combines both forward and backward chaining to
facilitate the inference process. It involves simultaneous exploration in both directions, starting from
the available facts and working forward to derive
new conclusions, while also starting from the goal
and working backward to decompose the goal into
sub-goals using applicable rules. In our research,
we define the existence of multiple deductions or
abductions as a confusion state since we aim to
ensure a depth-first searching process, thereby reducing the number of LLM calls. In a depth-first
search, when faced with multiple deductions or abductions at a single reasoning step, the challenge
lies in selecting the most suitable deduction to continue the chaining process. Therefore, we describe
this challenge as a confusion in the reasoning process. As the term confusion signifies the need to
resolve this ambiguity and make decisions to continue the reasoning chain effectively.
8580
-----
**3.2** **LLM Modules in Bi-Chainer**
To enable applying bidirectioanl chaining for textbased reasoning, we introduce six LLM-based modules: Fact Identification, Rule Selection, Logic Deduction, Logic Abduction, Fact Check, and Confusion Check. Each module is implemented by providing instructions with relevant in-context demonstrations to an LLM (see Appendix C.3 for details).
We describe these modules and then proceed to the
full algorithm.
**Fact Identification Module. Given the facts F**
from the premises and the hypothesis H, the Fact
Identification module is responsible for identifying
relevant facts F(H) ∈F that contribute to proving
the hypothesis.
**Rule Selection Module. Given a set of rules R**
from the premises and a hypothesis H, the Rule
Selection module in Forward Chaining identifies
a subset of rules _d_ such that the condition
_R_ _∈R_
of the rule entails with the facts F(H) and the
consequent of the rule entails with the hypothesis
consequent Q. If a rule exists that satisfies these
conditions, it is returned as it serves as a bridge
between the known facts and the hypothesis, facilitating the concatenation of forward and backward
chaining. However, if no such rule is found, only
the rules that can be entailed by the known facts
are returned. The Rule Selection module in Backward Chaining identifies a subset of rules _a_
_R_ _∈R_
such that the consequent of the rule unifies with the
consequent of the hypothesis Q.
**Logic Deduction & Logic Abduction Modules.**
The Logic Deduction module focuses on deductive
reasoning, starting from known facts F(H) and the
deductive rules Rd to derive a set of new conclusions _d. The Logic Abduction module, on the_
_F_
other hand, deals with abductive reasoning. It aims
to generate plausible explanations Fa that best lead
to the hypothesis consequent Q according to the
abductive rules _a. The generated explanations_
_R_
are then treated as new consequences that need to
be proven or validated.
**Fact Check. Given the facts F from the premises,**
the Fact Check module verifies if the hypothesis
_H entails (in which case the hypothesis is proved)_
or contradicts (in which case the hypothesis is disproved) with the facts. If no such fact can be found,
then the truth of H remains unknown.
**Confusion Check Module. The Confusion Check**
module determines the moment to switch between
forward and backward chaining. We define a situa
tion where confusion happens when executing the
uni-directional chaining at a single step, multiple
deductions (in forward chaining) or abductions (in
backward chaining) emerge. In the bidirectional
chaining process, if each reasoning step produces
consistent deduction Fd or abduction results Fa
based on the selected rules, the reasoning continues in that direction. However, if different results
emerge at each step, it indicates that the system
may be confused in selecting the appropriate rule
to proceed with. In such cases, the reasoning is
temporarily paused, and the other direction of reasoning is allowed to continue for a few steps to
gather additional information that can aid in determining the reasoning path in the current direction.
Bidirectional chaining thus involves continuously
switching between forward and backward chaining
until a rule is found that connects the consequent
of forward chaining with a plausible explanation
derived from backward chaining, or until the maximum step limit is reached.
**3.3** **The Bi-Chainer Algorithm**
Algorithm 1 provides a high-level description of
how the six LLM modules described earlier can
be integrated with bidirectional chaining to enable
text-based logical reasoning (the function calls corresponding to LLM modules are color-coded).
Bi-Chainer can be understood as a bidirectional
depth-first algorithm that focuses on reasoning with
premises. It employs a depth-first search approach
and switches between reasoning directions when
faced with multiple branching options. Bi-Chainer
takes a set of premises C = (F, R), a Hypothesis
_H with condition P and consequent Q, and a depth_
limit D as input. The algorithm starts by using the
_Fact Identify module to find facts F(P) that are es-_
sential for proving the hypothesis. It then employs
forward chaining to iteratively expand the determinate facts that are associated with and supportive
of the hypothesis.
During Forward Chaining, the Rule Selection
module selects rules Rd from R that are consistent
with the identified facts F(H). The Logical Deduction module then applies these rules and facts
to derive new conclusions Fd, which are subsequently added to the existing premises. The Fact
_Check module then verifies whether the hypothesis_
can be proved or disproved using the facts. If this
is the case, then the algorithm stops and returns
the result. If not the case, the Confusion Check
8581
-----
module examines the deduced results to identify
any inconsistencies. If different deduction results
emerge at each step, it suggests that further deductions based on these conclusions would lead to a
significant number of branching paths, deviating
from the depth-first approach. In such situations,
the algorithm switches the reasoning mode from
Forward Chaining to Backward Chaining. Similarly, during Backward Chaining, the Rule Selec_tion module identifies rules Ra from R that unifies_
with the hypothesis consequent. The Logical Ab_duction module then applies these rules to derive_
the plausible explanations Fa, which are then updated as the new consequent to be proved. The
_Fact Check module verifies whether the updated_
hypothesis can be proved or disproved using the
facts enriched by Forward Chaining. On the other
hand, the Confusion Check module examines any
inconsistencies are present in Fa to determine if a
change in the reasoning mode is necessary.
**4** **Experimental Setup**
We describe our baselines and datasets here, and
provide further implementation details in Appendix
B. Unless stated otherwise, all experiments are
based on GPT-4 (OpenAI, 2023).
**4.1** **Baselines**
We compare against the following four baselines.
**Standard directly prompts LLM to output labels**
and proofs in an end-to-end manner, showcasing
the lower bound of LLM’s capabilities.
**Chain-of-Thought (CoT) (Wei et al., 2022)**
adopts a step-by-step problem-solving approach,
generating explanations before providing the final
answer. In our work, the indeterminate explanations are the corresponding step-by-step proof.
**Selection-Inference (SI) (Creswell et al., 2022)**
is a forward modular reasoning framework. SI
starts from the facts and rules, it iteratively calls
selection and inference, until the goal can be proved
or disproved.
**Backward Chaining Reasoning (LAMBADA)**
(Kazemi et al., 2023) tackles multi-step reasoning
using backward chaining. LAMBADA starts from
the goal, it recursively selects rules that share the
same consequent as the goal and then decomposes
the goal into sub-goals based on the antecedent of
the selected rules. The recursive selection and decomposition process continues until the sub-goals
can be proved or disproved based on the given facts.
**4.2** **Datasets**
We experiment with four challenging logical reasoning datasets outlined below.
**ProofWriter (Tafjord et al., 2021) is a com-**
monly used synthetic dataset for testing logical
reasoning. We use the ProofWriter OWA dataset
of proof depth 0, 1, 2, 3 and 5. The task is
to determine the provability of the hypothesis as
Proved, Disproved, or Unknown based on the given
premises. Our reported results include two sets:
ProofWriter-PUD, containing all proven examples,
and ProofWriter-PD, excluding examples labeled
as Unknown. In line with the methodology outlined by Kazemi et al. (2023), we employ the first
1000 examples from the test set for our analysis.
**FOLIO (Han et al., 2022) is a challenging**
expert-written dataset with complex first-order
logic reasoning. The problems are mostly aligned
with real-world knowledge and use highly natural
wordings. We use the entire FOLIO test set for
evaluation, consisting of 204 examples.
**AR-LSAT (Zhong et al., 2022) is a challenging**
dataset that focuses on investigating the analytical
reasoning of text. The questions are collected from
the Law School Admission Test from 1991 to 2016.
We use the entire test set of 230 multiple choice
questions. AR-LSAT is particularly challenging,
with state-of-the-art models only achieving performance slightly better than random guessing (Liang
et al., 2022; Ribeiro et al., 2022).
**ParaRules (Clark et al., 2021) modifies from**
ProofWriter where the synthetically generated
premises are rewritten by crowdworkers to increase
diversity and naturalness. Thus, we can surpass the
evaluation of reasoning limited to templatic expressions. The provided examples necessitate proof
depths of up to 5, and the corresponding labels are
Proved, Disproved, or Unknown. We employ the
first 200 examples of the test set for evaluation.
**5** **Results**
We now describe the results and compare BiChainer with the baselines in detail.
**5.1** **Label Prediction Accuracy**
The overall label prediction accuracy results across
various reasoning frameworks are reported in Figure 2 (a)-(e). The Bi-Chainer framework, employing bi-directional chaining techniques, is observed
to significantly outperform both the foundational
reasoning models such as the standard and CoT
8582
-----
(a) ProofWriter-PUD (b) ProofWriter-PD
(c) FOLIO (d) AR-LSAT (e) ParaRules
Figure 2: Label prediction accuracies on (a)-(b) ProofWriter, (c) FOLIO, (d) AR-LSAT, and (e) ParaRules datasets.
frameworks, and the more advanced, modular reasoning systems such as the SI and LAMBADA
frameworks. In the evaluation of the ProofWriterPUD dataset at a reasoning depth of 5, the comparative analysis reveals that Bi-Chainer achieves a relative improvement of 8.9% over the SI framework.
Against the LAMBADA framework, Bi-Chainer
maintains a strong lead with a 6.3% relative improvement. Moreover, the FOLIO dataset, which
presents more difficult real-world reasoning challenges, also reflects the Bi-Chainer framework’s
superior performance. Here, Bi-Chainer records a
relative improvement of 14.1% when compared to
the SI framework. Against the backward-chaining
LAMBADA framework, Bi-Chainer again prevails
with a relative improvement of 6.6%.
In the context of the AR-LSAT dataset, which
involves complex analytical reasoning, the modular reasoning frameworks SI and LAMBADA exhibit lower performance compared to CoT. On the
other hand, Bi-Chainer demonstrates a relative increase of 8.5% in performance compared to CoT.
In ParaRules, the introduction of naturalness and
diversity through paraphrasing might inadvertently
introduce ambiguity of the original premises, resulting in a decrease in the accuracy of label prediction
compared to the ProofWriter dataset. However, BiChainer demonstrates a notable relative improvement of 9.1% over the SI framework and 5.9%
over the LAMBADA framework. This consistent
outperformance across diverse datasets indicates
the adaptability and generalization strength of the
Bi-Chainer framework’s reasoning mechanisms.
**5.2** **Proof Accuracy**
To validate whether each reasoning framework is
susceptible to hallucinations, which involve correct
final label predictions but incorrect intermediate
steps, we conduct an assessment of the proof accuracy. We randomly selected separate sets of 50
examples from Depth-5 of the ProofWriter-PUD
dataset where each reasoning framework predicted
the label correctly and manually verified if the
proof chain is correct or not. In each step, we compared the facts, rules, and resulting conclusions
utilized in the reasoning process to corresponding
steps in the reference reasoning path. A proof chain
is considered to be correct if these elements are consistent with each other. The proof accuracy results
are reported in Figure 3a.
We observe that different reasoning frameworks
demonstrate varying levels of logical reasoning hallucinations in different cases. In general, modular
reasoning frameworks, including SI, Lambada, and
Bi-Chainer, are less affected by implicit patterns
in language models and achieve higher proof accuracy compared to direct proof generation frameworks like CoT. CoT has an average proof accuracy of 68%, while SI achieves 78%. Lambada
demonstrates an impressive proof accuracy of 94%,
and Bi-Chainer surpasses all with the highest average proof accuracy of 98%. Specifically, we observe that whenever reasoning frameworks predict
Proved or Disproved, the prediction is mostly cor
8583
-----
(a) (b)
Figure 3: (a) Proof accuracy results on ProofWriter-PUD (Depth-5) for a set of randomly sampled examples for
which the models correctly predicted the goal. (b) Precision and Recall results for Premise Selection on the selected
samples from the ProofWriter-PUD (Depth-5), with shaded areas indicating the performance gap between different
reasoning frameworks for the Proved, Disproved, and Unknown cases.
(a) ProofWriter (b) FOLIO (c) AR-LSAT (d) ParaRules
Figure 4: Comparing SI, LAMBADA with Bi-Chainer w.r.t. the average number of inference calls they make per
example in different datasets.
rect. The accuracy is slightly more in cases where
the prediction is Disproved. We believe this is because in cases where the result is Disproved, the
reasoning path of the model typically involves accurately identifying contradictions or inconsistencies,
thereby reducing hallucinatory reasoning.
Moreover, in the case of unknown examples, forward chaining frameworks such as CoT and SI face
difficulties in accurately determining the correct
reasoning path, achieving relatively low accuracies
of 52% and 65%, respectively. In contrast, the
Lambada framework uses backward chaining to
capture goal-oriented premises, leading to a significant improvement in achieving 87% accuracy for
unknown cases. On the other hand, the Bi-Chainer
framework employs bidirectional chaining to assist premise selection under the guidance of other
side’s intermediate reasoning results, resulting in
an impressive accuracy of 96%.
In practice, generated reasoning paths often exhibit partial correctness, with errors occurring during the intermediate reasoning process. These errors are mainly attributed to incorrect premise selection, as large models possess powerful single-step
reasoning capabilities. To assess the extent of par
tially correct reasoning, we measure the precision
and recall of unique premises extracted from the
generated proof that are also present in the reference reasoning path. The results are presented in
Figure 3b. In the case of the CoT method, it heavily relies on internal knowledge and rules within
the model to generate proofs, resulting in a limited
selection of premises. Consequently, the method
exhibits higher recall values (around 5.3% higher)
than precision values.
On the contrary, the SI method requires considering all available facts and rules that can be used
for deduction at each step of the reasoning process.
This leads to a larger selection of premises in the
reasoning process, resulting in lower recall values
(around 3.6% lower) compared to precision values.
While most of the reasoning paths in Lambada are
correct, in situations where there are numerous and
complex facts and rules, there may be a process
of error correction. Consequently, the selection
of premises in the proof becomes more diverse,
leading to lower recall values (around 2.3% lower)
compared to precision values. In contrast, the BiChainer method excels in handling scenarios with
a large number of complex facts and rules. It lever
8584
-----
ages the guidance provided by the forward chaining
process, utilizing intermediate results to guide the
backward chaining. This approach mitigates the
occurrence of errors and the need for subsequent
corrections, resulting in both high recall and precision values.
**5.3** **Number of Inference Calls**
Another advantage of Bi-Chainer is its efficiency
compared to other modular reasoning frameworks,
such as SI and Lambada, which often require multiple LLM inference calls per example. In Figure
4, we compare the average number of LLM calls
per example for our datasets. For the ProofWriter
dataset, Bi-Chainer requires 14.25 LLM calls,
which is 1.12 times fewer than Lambada and 1.36
times fewer than SI. In the case of the FOLIO
dataset, which has a limited number of premises,
Bi-Chainer requires 9.22 LLM calls, exhibiting
1.18 times fewer calls than Lambada and 1.89 times
fewer calls than SI. However, the AR-LSAT dataset
poses a different challenge as it contains five options per question. This requires more LLM calls
for evaluating each option, resulting in 26.35 calls
for Bi-Chainer. Despite this, Bi-Chainer still reduces LLM calls by 1.12 times compared to SI
and 1.44 times compared to Lambada. As for
the ParaRules dataset, the presence of paraphrased
premises increases the difficulty of accurately selecting the relevant premises. Consequently, the
number of LLM calls for ParaRules exhibits a decrease compared to the ProofWriter dataset, with
Bi-Chainer requiring 11.78 calls.
**6** **Additional Results**
**Performance on Open-Source Model.** We also
adopt the open-sourced LLaMA2 7B model in
greedy search decoding strategy to supplement
the corresponding experiments on ProofWriter and
ParaRules datasets.
Method ProofWriter (d5) ParaRules
CoT 43.4 33.5
SI 52.6 41.0
Lambada 58.9 43.5
Bi-Chainer **62.3** **48.5**
Table 1: Label accuracy of LLaMA2 7B model on
ProofWriter and ParaRules datasets.
**Individual Module Performance.** To understand which components in Bi-Chainer are responsible for the failure cases, we computed the individ
ual accuracy of the six modules described in Section 3. For this purpose, we randomly sampled 100
examples from the validation set of ProofWriter.
This sampling included 20 examples for each reasoning depth. We then manually wrote the desired outputs for each module. A module prediction is considered correct if it matches our annotations. The performance of modules in Bi-Chainer
is shown in Table 2.
Model FC FI RS LD LA CC
GPT-4 97.82 98.78 91.52 97.44 95.26 97.73
LLaMA2 83.71 86.58 65.54 93.43 89.80 95.29
Table 2: Individual module performance in Bi-Chainer.
The evaluation results indicate that the Fact
Check module (FC), Fact Identify module (FI),
and Confusion Check module (CC) demonstrate a
better performance. On the other hand, the Rule
Selection module (RS) exhibits the lowest performance among all the modules, indicating that the
LLM still faces challenges in effectively selecting
the appropriate rules during the reasoning process.
Additionally, the Logical Abduction module (LA)
performs slightly lower than the Logical Deduction module (LD), suggesting that decomposing
conditions are slightly more difficult for the LLM
compared to making deduction inferences.
**Compare width-first search framework.** Tree
of Thoughts (ToT) reasoning framework (Yao et al.,
2024) performs reasoning and evaluation on each
intermediate result in a tree-searching manner.
Thus, compared to our depth-first bi-directional
searching framework, ToT is a width-first searching framework, resulting in a high number of inference calls. The result comparison between ToT
and Bi-Chainer is shown in Table 3.
Method Accuracy Inference calls
Standard 54.0 1
CoT 61.5 1
ToT 65.0 22.79
SI 65.5 18.24
Lambada 67.5 13.59
Bi-Chainer **72.0** **11.78**
Table 3: ToT performance on ParaRules.
The results demonstrate that ToT surpasses SI
but trails behind Lambada in terms of performance,
and falls even further behind Bi-Chainer. This can
be attributed to ToT’s focus on solving general complex reasoning tasks, rather than being specifically
tailored for goal-oriented tasks like logical reason
8585
-----
ing. Besides, ToT’s reasoning process, which involves tree-searching for each intermediate result,
leads to a significant number of Inference calls.
**Robustness Analysis.** We supplement the label
accuracy result (mean and standard deviation) of
both CoT baseline and our Bi-Chainer under 3 GPT4 runs on the FOLIO and AR-LSAT datasets in
Table 4. We observe that the variance across multiple runs is consistently low compared to the improvement in performance, suggesting that GPT-4
is stable in performing logical reasoning tasks.
Method FOLIO AR-LSAT
CoT 59.64 ± 0.6112 34.93 ± 1.0974
Bi-Chainer **81.24 ± 0.8328** **38.08 ± 0.9351**
Table 4: Label accuracy of CoT and Bi-Chainer on
FOLIO and AR-LSAT dataset. We report the mean and
standard deviation under 3 GPT-4 runs
**7** **Case Study**
We demonstrate a case study to understand the
performance of the Bi-Chainer method compared
to LAMBADA. We give a high-level overview and
abbreviated examples here, leaving full detailed
examples in Appendix A. Lambada experienced
premise confusion, it fails to accurately determine
the appropriate rule for the subsequent inference
step when multiple rules unify with the consequent
of the goal statement. As a result of choosing the
wrong rule, the model was unable to validate the
premise condition, resulting in a wrong conclusion.
Hypothesis: The cow chases the cow.
Step 1: Rule 2: If someone is rough and the
tiger sees the bear then they chase the cow.
Rule 3: If someone likes the tiger then they
chase the cow. Multiple rules unified.
Step 2: Select the shorter rule, Rule 2. Select the wrong rule.
Further steps fail to prove the goal.
Conclusion: Unknown.
**Premise confusion error: Lambada en-**
countered premise confusion where Rule
2 and Rule 3 are both unified with the consequent of the goal statement. The model
erroneously selects Rule 2 with fewer subgoals, leading to further steps that fail to
prove the sub-goal.
**Bi-Chainer for Lambada premise confusion**
Step 1: Identify the facts about the cow, The
cow is blue, and The cow chases the lion.
Step 2: In forward-chaining rule selection, we
have two candidate rules: Rule 1 and Rule 6.
Step 3: Forward-chaining Logical Deduction:
As the cow is blue, we can deduce that the
cow chases the tiger from Rule 1. Additionally,
since it is stated that the cow chases the lion,
we can further deduce that the cows are rough
from Rule 6.
**Detect forward chaining leads to multiple
deductions, switch to backward chaining. **
Step 4: Backward-chaining Rule Selection: we
have two candidate rules: Rule 6: if someone
is rough and the tiger sees the bear, then they
chase the cow. Rule 3 states that if someone
likes the tiger, then they chase the cow.
Step 4: Backward-chaining Logical Abduction:
Using Rule 6 and knowing the cow is rough
and the tiger sees the bear, we can deduce that
the cow chases the cow.
Conclusion: True.
**8** **Conclusion**
We propose the bidirectional chaining method, BiChainer, to overcome the limitations of existing
unidirectional chaining methods for complex logical reasoning. By dynamically switching to depthfirst reasoning in the opposite direction when faced
with multiple branching options, Bi-Chainer leverages intermediate reasoning results to enhance the
reasoning process. In the experiments, Bi-Chainer
demonstrates substantial accuracy improvements
over unidirectional chaining frameworks on challenging datasets. It also improves the accuracy of
intermediate proof steps and reduces the average
number of inference calls, resulting in more efficient and accurate reasoning.
**Acknowledgements**
This work was supported in part by the Research
Grants Council of the Hong Kong SAR under Grant
GRF 11217823 and Collaborative Research Fund
C1042-23GF, the National Natural Science Foundation of China under Grant 62371411, InnoHK initiative, the Government of the HKSAR,Laboratory
for AI-Powered Financial Technologies.
8586
-----
**Limitations**
This paper presents a novel approach for enhancing reasoning capabilities in large language models
through bidirectional chaining. However, it is crucial to acknowledge and address several limitations
inherent in this research:
(1) Scalability: The proposed approach may
face challenges in terms of scalability when applied to large-scale datasets or real-time applications. The computational complexity of bidirectional chaining may hinder its efficiency, potentially limiting its practicality for scenarios requiring
rapid and extensive reasoning.
(2) Dependency on Pretrained Models: The
approach heavily relies on pretrained language
models, which may introduce certain limitations.
Pretrained models are prone to biases and may not
capture all relevant contextual information, leading to potential errors or inaccuracies in reasoning
outcomes. Additionally, the reliance on pretrained
models limits the flexibility and adaptability of the
proposed method to new domains or specialized
contexts.
(3) Lack of Explainability: While bidirectional chaining enhances reasoning capabilities, it
may obscure the interpretability and explainability
of the model’s decision-making process. Understanding the reasoning steps and how conclusions
are reached becomes challenging, hindering transparency and trust in the system. This limitation
may impact the acceptance and adoption of the
proposed approach in critical applications where
interpretability is essential.
(4) Knowledge Acquisition and Representa**tion: The effectiveness of bidirectional chaining**
heavily depends on the availability and quality of
the underlying knowledge base. Incomplete or inaccurate knowledge representations may result in
flawed reasoning or incorrect conclusions. Additionally, the challenge of continuously updating and
maintaining the knowledge base to keep up with
evolving information poses a significant obstacle.
(5) Ethical Considerations: The utilization of
large language models raises ethical concerns, including the potential for generating biased or offensive content. Although bidirectional chaining aims
to enhance reasoning, it does not inherently address
these ethical challenges. Proactive measures, such
as comprehensive content filtering and bias detection mechanisms, should be integrated to mitigate
the risks associated with unintended outputs.
Addressing these limitations is vital for future
research in automated large language models reasoning with bidirectional chaining. Overcoming
scalability issues, ensuring model transparency, improving knowledge acquisition, and addressing ethical considerations will contribute to the broader
adoption and practicality of the proposed approach
in real-world applications.
**Ethics Statement**
This study utilizes publicly available datasets for
our models. Prior research endeavors have generally taken ethical considerations into account. We
have manually inspected a subset of samples and
found no explicit ethical concerns, including violent or offensive content. Nonetheless, it is crucial
to highlight that the output generated by large language models lacks the degree of control we might
assume. Consequently, we are prepared to implement measures to mitigate any unforeseen outputs.
**References**
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin,
Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul
Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2023. Palm: Scaling language
modeling with pathways. Journal of Machine Learn_ing Research, 24(240):1–113._
Hyung Won Chung, Le Hou, Shayne Longpre, Barret
Zoph, Yi Tay, William Fedus, Yunxuan Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al.
2022. Scaling instruction-finetuned language models.
_arXiv preprint arXiv:2210.11416._
Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2021.
Transformers as soft reasoners over language. In Pro_ceedings of the Twenty-Ninth International Confer-_
_ence on International Joint Conferences on Artificial_
_Intelligence, pages 3882–3890._
Antonia Creswell, Murray Shanahan, and Irina Higgins.
2022. Selection-inference: Exploiting large language
models for interpretable logical reasoning. In The
_Eleventh International Conference on Learning Rep-_
_resentations._
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and
[Tushar Khot. 2023. Complexity-based prompting for](https://openreview.net/pdf?id=yf1icZHC-l9)
[multi-step reasoning. In The Eleventh International](https://openreview.net/pdf?id=yf1icZHC-l9)
_Conference on Learning Representations, ICLR 2023,_
_Kigali, Rwanda, May 1-5, 2023. OpenReview.net._
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting
Qi, Martin Riddell, Luke Benson, Lucy Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al.
2022. Folio: Natural language reasoning with firstorder logic. arXiv preprint arXiv:2209.00840.
8587
-----
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
Jacob Steinhardt. 2021. Measuring mathematical
problem solving with the math dataset. In Thirty_fifth Conference on Neural Information Processing_
_Systems Datasets and Benchmarks Track (Round 2)._
Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi
[Wang, Hongkun Yu, and Jiawei Han. 2023. Large](https://aclanthology.org/2023.emnlp-main.67)
[language models can self-improve. In Proceedings](https://aclanthology.org/2023.emnlp-main.67)
_of the 2023 Conference on Empirical Methods in Nat-_
_ural Language Processing, EMNLP 2023, Singapore,_
_December 6-10, 2023, pages 1051–1068. Association_
for Computational Linguistics.
[Jie Huang and Kevin Chen-Chuan Chang. 2023. To-](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.67)
[wards reasoning in large language models: A survey.](https://doi.org/10.18653/V1/2023.FINDINGS-ACL.67)
In Findings of the Association for Computational
_Linguistics: ACL 2023, Toronto, Canada, July 9-14,_
_2023, pages 1049–1065. Association for Computa-_
tional Linguistics.
Mehran Kazemi, Najoung Kim, Deepti Bhatia, Xin
[Xu, and Deepak Ramachandran. 2023. LAMBADA:](https://doi.org/10.18653/v1/2023.acl-long.361)
[Backward chaining for automated reasoning in nat-](https://doi.org/10.18653/v1/2023.acl-long.361)
[ural language. In Proceedings of the 61st Annual](https://doi.org/10.18653/v1/2023.acl-long.361)
_Meeting of the Association for Computational Lin-_
_guistics (Volume 1: Long Papers), pages 6547–6568,_
Toronto, Canada. Association for Computational Linguistics.
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris
Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian
Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language
models. arXiv preprint arXiv:2211.09110.
John McCarthy and STANFORD UNIV CALIF DEPT
OF COMPUTER SCIENCE. 1963. Programs with
common sense.
OpenAI. 2023. [Gpt-4 technical report.](https://api.semanticscholar.org/CorpusID:257532815) _ArXiv,_
abs/2303.08774.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
2022. Training language models to follow instructions with human feedback. _Advances in Neural_
_Information Processing Systems, 35:27730–27744._
Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt,
[Noah A. Smith, and Mike Lewis. 2023. Measuring](https://aclanthology.org/2023.findings-emnlp.378)
[and narrowing the compositionality gap in language](https://aclanthology.org/2023.findings-emnlp.378)
[models. In Findings of the Association for Compu-](https://aclanthology.org/2023.findings-emnlp.378)
_tational Linguistics: EMNLP 2023, Singapore, De-_
_cember 6-10, 2023, pages 5687–5711. Association_
for Computational Linguistics.
Nazneen Fatema Rajani, Bryan McCann, Caiming
Xiong, and Richard Socher. 2019. [Explain your-](https://doi.org/10.18653/V1/P19-1487)
[self! leveraging language models for commonsense](https://doi.org/10.18653/V1/P19-1487)
[reasoning. In Proceedings of the 57th Conference of](https://doi.org/10.18653/V1/P19-1487)
_the Association for Computational Linguistics, ACL_
_2019, Florence, Italy, July 28- August 2, 2019, Vol-_
_ume 1: Long Papers, pages 4932–4942. Association_
for Computational Linguistics.
Danilo Neves Ribeiro, Shen Wang, Xiaofei Ma,
Henghui Zhu, Rui Dong, Deguang Kong, Juliette Burger, Anjelica Ramos, William Yang Wang,
George Karypis, et al. 2022. Street: A multi-task
structured reasoning and explanation benchmark. In
_The Eleventh International Conference on Learning_
_Representations._
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021.
Proofwriter: Generating implications, proofs, and
abductive statements over natural language. In Find_ings of the Association for Computational Linguistics:_
_ACL-IJCNLP 2021, pages 3621–3634._
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro,
Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint
_arXiv:2302.13971._
Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan,
and Subbarao Kambhampati. 2022. Large language
models still can’t plan (a benchmark for llms on planning and reasoning about change). In NeurIPS 2022
_Foundation Models for Decision Making Workshop._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023. [Self-consistency](https://openreview.net/pdf?id=1PL1NIMMrw)
[improves chain of thought reasoning in language](https://openreview.net/pdf?id=1PL1NIMMrw)
[models. In The Eleventh International Conference](https://openreview.net/pdf?id=1PL1NIMMrw)
_on Learning Representations, ICLR 2023, Kigali,_
_Rwanda, May 1-5, 2023. OpenReview.net._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,
et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural
_Information Processing Systems, 35:24824–24837._
Yixuan Weng, Minjun Zhu, Fei Xia, Bin Li, Shizhu He,
Shengping Liu, Bin Sun, Kang Liu, and Jun Zhao.
[2023. Large language models are better reasoners](https://aclanthology.org/2023.findings-emnlp.167)
[with self-verification. In Findings of the Associa-](https://aclanthology.org/2023.findings-emnlp.167)
_tion for Computational Linguistics: EMNLP 2023,_
_Singapore, December 6-10, 2023, pages 2550–2575._
Association for Computational Linguistics.
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Tom Griffiths, Yuan Cao, and Karthik Narasimhan.
2024. Tree of thoughts: Deliberate problem solving
with large language models. Advances in Neural
_Information Processing Systems, 36._
Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. Star: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing
_Systems, 35:15476–15488._
8588
-----
**A** **Additional Results and Analyses**
In this section, we provide some more in-depth
qualitative and quantitative analysis of the results
from our model and the baselines.
**A.1** **Biases of Reasoning Frameworks**
Figure 2 (a)-(b) demonstrates a performance disparity among different reasoning frameworks when
handling datasets with or without Unknown cases.
To gain deeper insights into the inherent biases of
each method, we provide detailed confusion matrices in Figure 5. The results reveal that Bi-Chainer
consistently outperforms other reasoning frameworks across all Proved, Disproved, and Unknown
cases, indicating its ability to achieve accurate and
well-balanced predictions. In contrast, CoT exhibits a noticeable bias in predicting Unknown labels, with 24% of Proved cases and 39% of Disproved cases being misclassified as Unknown. Consequently, in the absence of unknown cases, the
CoT method experiences a decline in model accuracy, while the other methods show an improvement.
Furthermore, we observe that forward chaining
is particularly effective in handling Proved cases,
while backward chaining demonstrates a more significant improvement in handling Disproved cases.
Compared to CoT, the Forward chaining-based SI
method shows a relative improvement of 29% for
Proved cases and 28% for Disproved cases. The
backward chaining-based Lambada method demonstrates a relative improvement of 31% for Proved
cases and an impressive relative improvement of
47% for Disproved cases, which is 1.7 times higher
than the improvement achieved by the SI method.
The Bi-Chainer method, which incorporates bidirectional reasoning, combines the advantages of
forward chaining that aligns with the natural flow
of logical order and backward chaining that focuses
on goal-oriented reasoning. It effectively addresses
situations of uncertainty in one-directional reasoning by timely incorporating intermediate results
from the other side as guidance. This enhances
the probability of selecting accurate premises for
reasoning. Consequently, the Bi-Chainer method
achieves a further relative improvement of 39%
in Proved cases compared to CoT, and a relative
improvement of 54% in Disproved cases.
Wanjun Zhong, Siyuan Wang, Duyu Tang, Zenan Xu,
Daya Guo, Yining Chen, Jiahai Wang, Jian Yin, Ming
Zhou, and Nan Duan. 2022. Analytical reasoning of
text. In Findings of the Association for Computa_tional Linguistics: NAACL 2022, pages 2306–2319._
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/pdf?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/pdf?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations, ICLR 2023, Kigali, Rwanda, May 1-5,_
_2023. OpenReview.net._
8589
-----
Figure 5: Confusion matrices.
**A.2** **Failure Modes of CoT**
**CoT demonstrates a self-bias. CoT tends to rely**
heavily on its internal knowledge and rules during
the reasoning process, disregarding the externally
provided facts and rules.
**(GPT-4 Incorrect CoT): Let’s think step**
by step. From Fact 9 and Rule 4, since the
cat sees the bear and someone who sees the
cat (which could be anyone, including the
cat itself) sees the bear, they eat the dog.
This rule suggests that the cat could eat the
dog because the cat sees the bear and the
cat sees itself, fulfilling the condition of the
rule. Conclusion: Proved
**Self-bias error: CoT makes an erroneous**
inference by concluding that the cat eats
the dog based on its own speculation rather
than the given facts. The model incorrectly
assumes that the cat can see itself, which is
not supported by the provided facts.
**Hallucinatory Proof: However, CoT pre-**
dict the correct label, indicating self-bias
leads to the generation of hallucinatory
proofs.
**CoT fails to make inferences beyond the ex-**
**plicit rules provided. In complex scenarios where**
multiple steps of reasoning or indirect relationships
are involved, CoT tends to focus on surface-level
connections and fails to engage in deep reasoning.
Hypothesis: Karen will share "Stranger
Things" to Lisa.
Premises:
1. "Stranger Things" is a popular Netflix
show.
2. If a Netflix show is popular, Karen will
binge-watch it.
3. If and only if Karen binge-watches a
Netflix show, she will download it.
4. Karen does not download "Black Mirror".
5. "Black Mirror" is a Netflix show.
6. If Karen binge-watches a Netflix show,
she will share it to Lisa.
**(GPT-4 Incorrect CoT): Let’s think step by**
step. "Stranger Things" is a popular Netflix
show, so Karen will binge-watch it (Premise
1, Premise 2). However, there is no direct information provided about whether
Karen has actually binge-watched "Stranger
Things" or whether she has downloaded it.
Therefore, we cannot definitively conclude
whether Karen will share "Stranger Things"
with Lisa based on the given information.
Conclusion: Unknown
**Multi-step reasoning error: CoT makes**
an erroneous inference of the unknown due
to its inability to engage in deeper reasoning.
CoT gives up after failing to find any direct
link between watching "Stranger Things"
and sharing with Lisa.
**A.3** **Failure modes of SI**
**SI encounters over-inference in the reasoning**
**process. SI fails to effectively identify and utilize**
the premise information relevant to validating the
hypothesis, leading to excessive processing and reasoning with all available premises, including those
that do not contribute to the verification process.
This redundancy and unnecessary computation reduce the efficiency and accuracy of SI’s reasoning.
8590
-----
Hypothesis: The squirrel is blue.
Facts:
1. The dog eats the mouse.
2. The dog eats the tiger.
3. The dog visits the squirrel.
4. The mouse is green.
5. The mouse visits the tiger.
6. The squirrel is big.
7. The squirrel is round.
8. The squirrel likes the dog.
9. The tiger eats the dog.
10. The tiger visits the mouse.
Rules:
1. If someone eats the tiger and the tiger is
big then they are green.
2. If someone is green then they like the
squirrel.
3. If the dog is green then the dog likes the
mouse.
4. If someone visits the tiger then the tiger
is blue.
5. If someone visits the tiger then the tiger
visits the dog.
6. If someone is blue and they eat the squirrel then the squirrel is green.
7. If someone is blue then they eat the squirrel.
8. If someone likes the dog and they are
green then they are blue.
**SI reasoning steps:**
Step 1: Analyze Fact 5 and Rule 4.
Fact 5: the mouse visits the tiger. Rule 4:
if someone visits the tiger then the tiger is
blue. Deduction: The tiger is blue. Valid
Step 2: Analyze the deduced fact that the
tiger is blue and Rule 7. Valid
Rule 7: if someone is blue, they eat the
squirrel.
Deduction: The tiger eats the squirrel. Valid
Step 3: Analyze Fact 4 and Rule 2.
Fact 4: the mouse is green. Rule 2: if someone is green then they like the squirrel.
Deduction: Mouse likes the squirrel. Invalid
Step 4: Analyze Fact 8 and Rule 8.
Fact 8: the squirrel likes the dog. Rule 8: if
someone likes the dog and they are green
then they are blue.
Deduction: We do not have information
about the squirrel being green, so Rule 8
does not apply. Wrong deduction
Conclusion: Unknown.
**Over-inference error: Despite deducing**
that the squirrel is green based on the deduction results in Step 1 and 2 using Rule
6, the presence of invalid reasoning Step 3
prevents the model from progressing along
the correct reasoning path within its limited
multi-step reasoning capacity. As a result,
the model incorrectly concludes that Rule 8
cannot be applied.
**A.4** **Failure Modes of LAMBADA**
**Lambada experienced premise confusion LAM-**
BADA fails to accurately determine the appropriate
rule for the subsequent inference step when multiple rules unify with the consequent of the goal
statement. As a result of choosing the wrong rule,
the model was unable to validate the premise condition, resulting in a wrong conclusion.
Hypothesis: The cow chases the bear.
Facts:
1. The bear is blue.
2. The bear is round.
3. The bear sees the cow.
4. The cow is blue.
5. The lion is rough.
6. The lion likes the tiger.
7. The lion sees the bear.
8. The tiger is cold.
9. The tiger is round.
10. The tiger sees the bear.
11. The tiger sees the cow.
Rules:
1. If someone is blue then they chase the
tiger.
2. If the cow is blue and the tiger sees the
bear then the cow chases the lion.
3. If someone likes the tiger then they chase
the lion.
4. If someone likes the lion then the lion
chases the tiger.
5. If the cow is cold and the cow chases the
bear then the bear chases the tiger.
6. If someone chases the cow and they
chase the lion then they chase the bear.
7. If someone is rough then they chase the
cow.
8591
-----
kens to generate to 1024 for FOLIO and 4096 for
ProofWriter, ParaRules, and AR-LSAT. We use
gpt-4-0613 checkpoint of GPT-4 model and invoke
the model via the OpenAI API. We prompt the
model with a set of instructions and 1-8 ICL examples. The examples follow a structured text format
designed to scaffold generations and facilitate postprocessing. Each ICL example begins with a task
description, followed by the NL hypothesis. The
premises are then outlined using numbered statements. The necessary reasoning steps for each
example are subsequently outlined in a separate
section.
**ProofWriter. We utilize a subset of the publicly**
available ProofWriter dataset, specifically the Open
World Assumption (OWA) dataset [1]. Due to the
cost of inference, we used the first 1000 examples
in the test set. In the Closed World Assumption
(CWA) dataset, everything is either proven True or
False. However, in the OWA dataset, if a statement
cannot be proven True or False, it is labeled as Unknown. For Unknown samples, where there is no
explicit reasoning trace, it is essential to enumerate
all possible facts. If the hypothesis has not been
proven or disproven, it is classified as Unknown.
For this reason, we need to manually verify if the
proof chain is correct or not for proof accuracy
analysis.
**FOLIO. We use the publicly available FOLIO**
dataset [2] and use the validation split of the dataset
in our evaluation as the testing split is not publicly
available. The original dataset has 204 validation
examples.
**AR-LSAT. We use the publicly available AR-**
LSAT dataset [3] and use the full test set of 230
examples in our evaluation. The AR-LSAT dataset
differs from other datasets in that its labels are not
fixed. Each example in the AR-LSAT dataset consists of five options associated with a question. To
address this, during the prompting process, we concatenate the question with each option to form a
hypothesis. Consequently, each AR-LSAT example has five hypotheses that need to be validated.
However, the results obtained from validating earlier hypotheses are added to the premises to reduce
redundant reasoning among multiple hypotheses.
**Pararules. We use a subset of the publicly avail-**
1https://allenai.org/data/proofwriter
2https://github.com/Yale-LILY/FOLIO
3https://github.com/zhongwanjun/ARLSAT/tree/main/data
8. If someone is cold then they are blue.
9. If someone is blue and they chase the
lion then they are rough.
**LAMBADA reasoning steps:**
Step 1: Select Rule 6, If someone chases
the cow and they chase the lion then they
chase the bear.
Step 2: We need to prove the cow chases
the cow and they chase the lion.
Step 3: To prove the cow chases the cow,
select Rule 7: If someone is rough then they
chase the cow.
Step 4: We need to prove the cow is rough.
Step 5: To prove the cow is rough, select
Rule 9: If someone is blue and they chase
the lion then they are rough.
Step 6: By checking the facts, we know that
the cow is blue (Premise 4).
Step 7: We need to prove the cow chases
the lion.
Step 8: To prove the cow chases the lion, we
have two candidate rules. Rule 2: If the cow
is blue and the tiger sees the bear then the
cow chases the lion; and Rule 3: If someone
likes the tiger then they chase the lion.
Step 9: As Rule 3 has fewer sub-goals, we
start with proving the cow likes the tiger.
Select the wrong rule based on the Rerank
strategy in LAMBADA.
Step 10: Based on the given information,
we were unable to find a rule or fact that directly connects or unifies with the statement
"The cow likes the tiger." Therefore, the
truth or validity of this statement remains
unknown based on the provided context.
Conclusion: Unknown.
**Premise confusion error: Lambada en-**
countered premise confusion where Rule
2 and Rule 3 are both unified with the consequent of the goal statement. The model
erroneously selects Rule 3 with fewer subgoals, leading to further steps that fail to
prove the sub-goal.
**B** **Implementation Details**
For our experiments, we used the GPT-4 (OpenAI, 2023) for all the models (both Bi-Chainer
and the baselines). The decoding temperature was
set to 0.1. we limit the maximum number of to
8592
-----
able ParaRules dataset [4], specifically the parallel
dataset that runs through the Problog reasoner that
produced the same labels. The subset consists of
the first 200 examples from the test set, with a
reasoning depth of 5. Table 5 provides a comprehensive summary of the examples utilized in our
study, which are derived from four distinct datasets
representing three different types of logical reasoning problems.
**C** **Few-Shot Prompts**
We select representative samples from the training
split of the dataset as our few-shot examples. These
samples are chosen to ensure a balanced representation across different labels. As the training set
lacks correct proofs, we manually provide the corresponding proof for each example. We utilize the
FOLIO dataset for demonstrating prompts across
different reasoning frameworks due to its limited
number of premises, which facilitates the presentation.
**C.1** **Chain-of-Thought Prompting**
Task Description:
Given a set of premises, you have to reason
whether the hypothesis is true, false, or
unknown.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
1: A La Liga soccer team ranks higher than
another if it receives more points.
2: If two La Liga soccer teams recieve the
same points, the team which recieves more
points from the games between the two
teams ranks higher.
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid
recieves 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both recieve 3 points from the
games between them.
4https://allenai.org/data/ruletaker
Reason:
Let’s think step by step. As indicated by
Premise 3, Real Madrid and Barcelona
are both La Liga soccer teams. From
premise 4, Real Madrid received 86 points,
and Barcelona received 73 points. This
implies Real Madrid has more points
than Barcelona. From premise 1, if a
team receives more points, it ranks higher.
Therefore, Real Madrid ranks higher than
Barcelona based on points.
Answer:
True
**C.2** **Selection-Inference Prompting**
SI framework iteratively calls the selection and
inference module. The selection prompt is:
Task Description:
Given a set of premises, you have to reason
whether the hypothesis is true, false, or
unknown. To prove the hypothesis, you
need to select the premises where new
conclusions can be derived toward proving
the goal.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
1: A La Liga soccer team ranks higher than
another if it receives more points.
2: If two La Liga soccer teams recieve the
same points, the team which recieves more
points from the games between the two
teams ranks higher.
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid
receives 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both received 3 points from the
games between them.
Selected Premises:
Step 1: Premise 3: Real Madrid and
Barcelona are both La Liga soccer teams.
8593
-----
Step 2: Premise 4, Real Madrid received
86 points, and Barcelona received 73 points.
Step 3: Premise 1, If a team receives more
points, it ranks higher.
The inference prompt is:
Task Description:
Derive the inferences based on the selected
premises.
Inferences:
Step 1: From Premises 4, Real Madrid received 86 points, and Barcelona received 73
points. This implies Real Madrid has more
points than Barcelona.
Step 2: From Premise 1: If a team receives
more points, it ranks higher. We know
that Real Madrid receives more points than
Barcelona. Therefore, Real Madrid ranks
higher than Barcelona.
**C.3** **LAMBADA Prompting**
LAMBADA employs backward chaining with four
modules: Fact Check, Rule Selection, Goal Decomposition, and Sign Agreement. We add instructions
for LAMBADA to align with our method, the additional instructions only summarize the main idea
of each module.
The prompt for Fact Chek is:
Task Description:
Given a set of premises, you have to
reason whether the hypothesis is true, false,
or unknown. To prove the hypothesis,
you need to check the premises whether
the hypothesis can be directly proved or
disproved by one of the premises.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
...
6: In La Liga 2021-2022, Real Madrid
ranks higher than Barcelona.
Fact Check:
8594
The hypothesis can be directly proved by
Premise 6.
The prompt for Rule Selection is:
Task Description:
Given a set of premises, you have to reason
whether the hypothesis is true, false, or
unknown. To prove the hypothesis, you
need to select the rules that share the
consistent consequences as the hypothesis.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
1: A La Liga soccer team ranks higher than
another if it receives more points.
2: If two La Liga soccer teams recieve the
same points, the team which recieves more
points from the games between the two
teams ranks higher.
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid
recieves 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both recieve 3 points from the
games between them.
Rule Selection:
Premise 1, A La Liga soccer team ranks
higher than another if it receives more
points. or
Premise 2: If two La Liga soccer teams
recieve the same points, the team which recieves more points from the games between
the two teams ranks higher.
The prompt for Goal Decomposition is:
Task Description:
Analyze the plausible sub-goals for the
selected rules.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
-----
Decomposed Sub-Goals:
According to Premise 1, if we want to prove
a La Liga soccer team ranks higher than another, we need to prove the La Liga soccer
team receives more points. or
According to Premise 2, if we want to prove
a La Liga soccer team ranks higher than another, we need to prove two La Liga soccer
teams receive the same points, and one of
them receives more points from the games
between the two teams.
The prompt for Sign-Agreement is:
Task Description:
Check whether the consequence of the
rule agrees or disagrees with the hypothesis.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Rule:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Agreement Sign:
Agree.
**C.4** **Bi-Chainer Prompting**
Bi-Chainer employs bi-directional chaining with
six modules: Fact Check, Fact Identify, Rule Selection, Logical Deduction, Logical Abduction, and
Confusion Check. The prompt for the Fact Check
module in our approach aligns with the prompt
used in LAMBADA, as presented above.
The prompt for Fact Identify is:
Task Description:
Given a set of premises, you have to reason
whether the hypothesis is true, false, or
unknown. To prove the hypothesis, you
need to identify the premises where new
conclusions can be derived toward proving
the goal.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
1: A La Liga soccer team ranks higher than
another if it receives more points.
2: If two La Liga soccer teams recieve the
same points, the team which recieves more
points from the games between the two
teams ranks higher.
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid
recieves 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both recieve 3 points from the
games between them.
Fact Identify:
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid recieves 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both recieve 3 points from the
games between them.
The prompt for Rule Selection in Forward Chaining is:
Task Description:
Given a set of premises, you have to reason
whether the hypothesis is true, false, or unknown. To prove the hypothesis, you need
to select the rules whose conditions entail
the identified facts and whose consequents
entail the consequent of the hypothesis. If a
rule satisfying these criteria is found, return
it as the result. Otherwise, return only the
rules that are entailed by the identified facts.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
1: A La Liga soccer team ranks higher than
another if it receives more points.
2: If two La Liga soccer teams recieve the
same points, the team which recieves more
8595
-----
points from the games between the two
teams ranks higher.
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid
recieves 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both recieve 3 points from the
games between them.
Rule Selection:
Premise 1, A La Liga soccer team ranks
higher than another if it receives more
points.
The prompt for Logical Deduction:
Task Description:
Derive the inferences based on the selected
premises.
Inferences:
We know that Real Madrid receives more
points than Barcelona (Premise 4). Therefore, Real Madrid ranks higher than
Barcelona (Premise 1).
The prompt for Rule Selection in Backward
Chaining is:
Task Description:
Given a set of premises, you have to reason
whether the hypothesis is true, false, or
unknown. To prove the hypothesis, you
need to select the rules whose consequences
entail the consequence of the hypothesis.
Hypothesis:
In La Liga 2021-2022, Real Madrid ranks
higher than Barcelona.
Premises:
1: A La Liga soccer team ranks higher than
another if it receives more points.
2: If two La Liga soccer teams recieve the
same points, the team which recieves more
points from the games between the two
teams ranks higher.
8596
3: Real Madrid and Barcelona are both La
Liga soccer teams.
4: In La Liga 2021-2022, Real Madrid
recieves 86 points and Barcelon recieves 73
points.
5: In La Liga 2021-2022, Real Madrid and
Barcelona both recieve 3 points from the
games between them.
Rule Selection:
Premise 1, A La Liga soccer team ranks
higher than another if it receives more
points. or
Premise 2: If two La Liga soccer teams
recieve the same points, the team which
recieves more points from the games
between the two teams ranks higher.
The prompt for Logical Abduction:
Task Description:
Analyze the plausible explanations for the
selected rules.
Plausible Reasons:
According to Premise 1, if we want to prove
a La Liga soccer team ranks higher than another, we need to prove the La Liga soccer
team receives more points. or
According to Premise 2, if we want to prove
a La Liga soccer team ranks higher than another, we need to prove two La Liga soccer
teams receive the same points, and one of
them receives more points from the games
between the two teams.
The prompt for Confusion Check:
Task Description:
Check whether each reasoning step produces consistent deduction or induction
results after applying the selected rules.
Abduction Results:
According to Premise 1, if we want to
prove a La Liga soccer team ranks higher
than another, we need to prove the La Liga
soccer team receives more points. or
According to Premise 2, if we want to
-----
prove a La Liga soccer team ranks higher
than another, we need to prove two La Liga
soccer teams receive the same points, and
one of them receives more points from the
games between the two teams.
Confusion Check:
True
8597
-----
Problem Example Dataset
Premises:
1. The bear sees the mouse.
2. The cow visits the dog.
3. The dog visits the cow.
4. The mouse chases the bear.
5. The mouse chases the dog.
6. The mouse is young.
7. The mouse sees the bear.
8. If the mouse is rough and the mouse sees the cow then the mouse is not round.
9. If someone chases the mouse then they see the mouse.
10. If someone is big then they see the dog.
11. If someone is cold and they do not visit the mouse then the mouse sees the dog.
12. If someone sees the mouse then they are big.
13. If someone is young and they visit the cow then the cow does not visit the dog.
14. If someone sees the dog and the dog visits the cow then the cow sees the mouse.
15. If someone sees the dog then the dog sees the bear.
Hypothesis:
The bear is not big.
Premises:
1: "Stranger Things" is a popular Netflix show
2: If a Netflix show is popular, Karen will binge-watch it
3: If and only if Karen binge-watches a Netflix show, she will download it
4: Karen does not download "Black Mirror"
5: "Black Mirror" is a Netflix show
6: If Karen binge-watches a Netflix show, she will share it to Lisa
Hypothesis:
Karen will share "Stranger Things" to Lisa.
Premises:
1. The organizer of a reading club will select at least five and at most six works
from a group of nine works.
2. The group consists of three French novels, three Russian novels, two French plays,
and one Russian play.
3. No more than four French works are selected.
4. At least three but no more than four novels are selected.
5. At least as many French novels as Russian novels are selected.
6. If both French plays are selected, then the Russian play is not selected.
Hypothesis 1: No Russian novels are selected.
Hypothesis 2: Exactly one French novel is selected.
Hypothesis 3: All three plays are selected.
Hypothesis 4: All three Russian novels are selected.
Hypothesis 5: All five French works are selected.
Deductive
Reasoning
First-Order
Logic
Analytical
Reasoning
ProofWriter
ParaRules
FOLIO
AR-LSAT
Table 5: A summary of the examples we use for the four datasets in our study, representing three different types of
logical reasoning problems.
8598
-----
| [
"Shuqi, Liu",
"Vivek, Srikumar",
"Bowei, He",
"Linqi, Song",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Findings | false | 0 | 0 | null | https://aclanthology.org/2024.findings-acl.507 | https://arxiv.org/abs/2406.06586 | https://www.semanticscholar.org/paper/90975814f51cbca3e838a1e00358492ba4d87ca6 |
Boosting Large Language Models with Socratic Method for Conversational Mathematics Teaching | With the introduction of large language models (LLMs), automatic math reasoning has seen tremendous success. However, current methods primarily focus on providing solutions or using techniques like Chain-of-Thought to enhance problem-solving accuracy. In this paper, we focus on improving the capability of mathematics teaching via a Socratic teaching-based LLM (\texttt{SocraticLLM}), which guides learners toward profound thinking with clarity and self-discovery via conversation. We collect and release a high-quality mathematical teaching dataset, named \texttt{SocraticMATH}, which provides Socratic-style conversations of problems with extra knowledge. Also, we propose a knowledge-enhanced LLM as a strong baseline to generate reliable responses with review, guidance/heuristic, rectification, and summarization. Experimental results show the great advantages of \texttt{SocraticLLM} by comparing it with several strong generative models. The codes and datasets are available on \url{https://github.com/ECNU-ICALK/SocraticMath}. | This paper proposes a knowledge-enhanced LLM as a strong baseline to generate reliable responses with review, guidance/heuristic, rectification, and summarization, and shows the great advantages of \texttt{SocraticLLM} by comparing it with several strong generative models. | [
"Hanglei, Hu",
"Jie, Zhou",
"Yuyang, Ding",
"Qin, Chen",
"Bo, Jiang",
"Liang, He"
] | 2024-07-24T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2407.17349v1 | https://arxiv.org/abs/2407.17349 | https://www.semanticscholar.org/paper/fcc406dcbf883770e22b78ac8ebfa4a24c1536bd |
|
Brain-Inspired Two-Stage Approach: Enhancing Mathematical Reasoning by Imitating Human Thought Processes | Although large language models demonstrate emergent abilities in solving math word problems, there is a challenging task in complex multi-step mathematical reasoning tasks. To improve model performance on mathematical reasoning tasks, previous work has conducted supervised fine-tuning on open-source models by improving the quality and quantity of data. In this paper, we propose a novel approach, named Brain, to imitate human thought processes to enhance mathematical reasoning abilities, using the Frontal Lobe Model to generate plans, and then employing the Parietal Lobe Model to generate code and execute to obtain answers. First, we achieve SOTA performance in comparison with Code LLaMA 7B based models through this method. Secondly, we find that plans can be explicitly extracted from natural language, code, or formal language. Our code and data are publicly available at https://github.com/cyzhh/Brain. | A novel approach, named Brain, to imitate human thought processes to enhance mathematical reasoning abilities, using the Frontal Lobe Model to generate plans, and then employing the Parietal Lobe Model to generate code and execute to obtain answers. | ## Brain-Inspired Two-Stage Approach: Enhancing Mathematical Reasoning by Imitating Human Thought Processes
**Yezeng Chen[3][1,2], Zui Chen[3][1,2], Yi Zhou[♣][3]**
1School of Information Science and Technology, ShanghaiTech University
2Shanghai Innovation Center for Processor Technologies
3School of Information Science and Technology, University of Science and Technology of China
{chenyz2022, chenzui2022}@shanghaitech.edu.cn;
[email protected]
step mathematical reasoning tasks. If we consider
a LLM as a human brain, then pre-training is akin
to enhancing the general cognitive abilities of the
entire brain. Fine-tuning and prompting are analogous to optimizing the hippocampus, reinforcing
the forms of knowledge input and memory. Verification is like enhancing the ability of the frontal and
parietal lobes, consolidating and refining learned
knowledge through testing and error correction.
Recent works (Yue et al., 2023; Gou et al., 2023;
Yu et al., 2023b) attempt to enhance the ability of
LLMs in complex multi-step mathematical reasoning tasks by increasing the amount and improving the quality of supervised fine-tuning (SFT)
training data, we believe that simply activating
the hippocampus of the LLMs is not enough. It
is necessary to activate the corresponding ability
in each brain region. Specifically, by activating
the frontal and parietal lobes respectively, we can
correspondingly activate the ability to understand
problem-solving strategies and to understand coding, thereby improving the ability of LLMs in complex multi-step mathematical reasoning tasks.
Process Reward Model (PRM) (Zhu et al.,
2023a; Lightman et al., 2023; Wang et al., 2023c)
and Lean Reward Model (LRM) can significantly
reduce the occurrence of erroneous steps and thus
enhance the performance of LLMs. PRM evaluates
the reasoning paths step-by-step, while LRM enables the model to transform its generated Chain of
Thought (CoT) process into a Lean [1] format, then
assesses the correctness of the process through the
Lean calculation results. The manual annotation required for PRM, especially for complex multi-step
reasoning tasks, demands high annotator skills and
is costly, with LRM being even more expensive.
Additionally, these two types of validation models
all train the "frontal lobe" and "parietal lobe" regions of LLMs simultaneously, meaning the LLM
cannot concurrently have the abilities of both plan
1https://lean-lang.org/
**Abstract**
Although large language models demonstrate
emergent abilities in solving math word problems, there is a challenging task in complex
multi-step mathematical reasoning tasks. To
improve model performance on mathematical
reasoning tasks, previous work has conducted
supervised fine-tuning on open-source models
by improving the quality and quantity of data.
In this paper, we propose a novel approach,
named Brain, to imitate human thought processes to enhance mathematical reasoning abilities, using the Frontal Lobe Model to generate
plans, and then employing the Parietal Lobe
Model to generate code and execute to obtain
answers. First, we achieve SOTA performance
in comparison with Code LLaMA 7B based
models through this method. Secondly, we
find that plans can be explicitly extracted from
natural language, code, or formal language.
Our code and data are publicly available at
[https://github.com/cyzhh/Brain.](https://github.com/cyzhh/Brain)
**Introduction**
Although Large Language Models (LLMs) possess emergent abilities, including a certain level
of Math Word Problem-solving ability in mathematical reasoning, whether through pre-training
(Touvron et al., 2023; Rozière et al., 2023; OpenAI,
2023; Anil et al., 2023), few-shot learning with
prompts (Zhang et al., 2023; Zheng et al., 2023;
Zhu et al., 2023b; Wang et al., 2023b), through finetuning (Wang et al., 2023a; Yuan et al., 2023; Luo
et al., 2023) or verification (Deng et al., 2023; Wu
et al., 2023b; Wang et al., 2023c; Romera-Paredes
et al., 2023), they lack strong logical reasoning
skills and face challenges in complex multi-step
mathematical reasoning tasks.
Previous research has explored various methods
to expand the abilities of LLMs in complex multi
_3_
Equal Contribution.
_♣_ Corresponding Authors.
-----
Figure 1: Brain, using a combined approach of the Frontal Lobe Model and the Parietal Lobe Model to simulate the
human problem-solving thought process.
ning and reasoning calculation effectively.
However, we believe that the logical abilities of
LLMs in mathematical reasoning have not been
fully demonstrated, as LLMs cannot simultaneously manage both planning and reasoning calculation. In this paper, we aim to address the following research questions: (♠1) Does the output of
LLMs contain a plan? If so, can it be explicitly
extracted? (♠2) How can we use the plan to solve
complex reasoning tasks? (♠3) How can we construct higher-quality plans? Is a high-quality plan
necessarily useful?
To address this challenge and overcome the limitation of LLMs’ ability, in this paper, we propose a
two-stage technique called Brain, whose overview
is shown in Figure 1. It involves constructing a
novel step-level model framework for solving mathematical reasoning tasks to simulate the human
approach to coding for solving real-world issues:
1) Inspired by the use of high-level directives and
meta-prompting to guide the LM in breaking down
complex tasks into smaller, more manageable subtasks (Hong et al., 2023b; Suzgun and Kalai, 2024),
we decompose complex mathematical reasoning
problems into two steps using the same LLM: planning based on the question, followed by code generation based on the plan. 2) These two tasks are
assigned to specialized models for the Frontal Lobe
Model for decision-making and the Parietal Lobe
Model for code structure and logical flow. We use
Direct Preference Optimization (DPO) to optimize
the Models to automatically select plans that align
more closely with the question.
Overall, in this paper, our contributions are as
follows:
- We propose a novel approach Brain that imitate human brain thought processes to enhance
mathematical reasoning ablilities.
- Brain achieves SOTA performance in comparison with Code LLaMA 7B based models
with zero-shot.
- We find that plans can be explicitly extracted
from natural language, code, or formal language.
**2** **Related Work**
In this section, we provide a brief overview of the
progress made in mathematical reasoning tasks,
emphasizing their relevance and connection to our
work.
**Reasoning Format. Recent works have utilized**
natural language (Yu et al., 2023a; Zhang et al.,
2023; Zhu et al., 2023b), code (Wu et al., 2023b;
Yue et al., 2023; Wang et al., 2023a; Gou et al.,
2023), and formal language (Gao et al., 2023; Trinh
et al., 2024) to solve complex mathematical reasoning tasks. A two-stage training method (Shao et al.,
2024) has demonstrated that code-based training is
beneficial for program-assisted mathematical reasoning, thereby directly leveraging the contextual
learning abilities of LLMs to generate more precise
and rigorous deductive reasoning.
**Planning. Despite significant progress in en-**
hancing the capability of LLMs to handle com
-----
Figure 2: The overview of our proposed method, Brain.
plex reasoning tasks through appropriate prompting for global planning (Yao et al., 2023), construction of step-by-step reasoning chains (Wang et al.,
2023c; Khot et al., 2023), and the use of external tools (Gou et al., 2023; Yue et al., 2023; Shao
et al., 2024), these methods still have some limitations. Recent research (Suzgun and Kalai, 2024)
has made strides by using high-level directives and
meta-prompting to guide LMs in breaking down
complex tasks into smaller, more manageable subtasks. However, these methods not only require
more resources and greater time complexity but
also cannot fully activate the large model’s abilities
in mathematical reasoning.
**Self-Verification.** Some methods (Yu et al.,
2023a; Deng et al., 2023) calibrate through answers
is insufficient for LLMs to generate high-quality
reasoning steps and answers. Therefore, we need
to focus on fine-grained step-level self-verification
(Wu et al., 2023a). Current research focuses on
reverse verification by masking numbers in reasoning paths (Wu et al., 2023b; Li et al., 2023),
adding realism to predictions by post-editing reasoning chains based on external knowledge or tools
(Zhao et al., 2023; Sun et al., 2023), modifying
reasoning steps without external feedback (Hong
et al., 2023a; Huang et al., 2023) and decomposing reasoning steps and self-correct each step (Paul
et al., 2023; Ling et al., 2023). Also, research has
been conducted to optimize self-verification performance through Process Reward Model (PRM),
including utilizing human annotators (Zhu et al.,
2023a; Lightman et al., 2023), Monte Carlo Tree
Search (Wang et al., 2023c; Hao et al., 2023),
and InternLM2-Math [2], which introduces Outcome
Reward Model (ORM), Process Reward Model
(PRM), and Lean as Reward Model (LRM) simultaneously. It is demonstrated that more fine-grained
internal and external feedback can significantly improve the accuracy, quality, and stability of model
outputs.
**3** **Brain**
**3.1** **Overview**
Figure 2 shows the overview of training Brain. In
this section, we want to emphasize three key points:
1. Utilizing prompts to obtain a large number
of high-quality plan datasets and preference
datasets from LLMs (§3.2);
2. Training the frontal lobe model to better generate high-quality Plans from problems (§3.3);
3. Training the parietal lobe model to better generate high-quality code-form reasoning paths
from Plans, ultimately achieving highly accurate answers (§3.4).
2https://github.com/InternLM/InternLM-Math
-----
**The few-shot prompt C for GSM8K high-quality plan creating.**
**R: Let’s write a plan based on the python program for solving math problems.**
**Question:**
Richard lives in an apartment building with 15 floors. Each floor contains 8 units, and 3/4 of
the building is occupied. What’s the total number of unoccupied units In the building?
**Program:**
”’python
def solution():
num_floors = 15
num_units_per_floor = 8
num_total_floors = num_floors * num_units_per_floor
num_units_occupied = num_floors * num_units_per_floor * 0.75
num_units_unoccupied = num_total_floors - num_units_occupied
return num_units_unoccupied
”’
**Plan:**
To solve the problem follow these steps:
1. Determine the total number of units in the building. We can multiply the number of units per
floor by the total number of floors.
2. Determine the number of occupied units. We can multiply the total number of units by the
fraction representing the occupied units.
3. Determine the number of unoccupied units. We can subtract the number of occupied units
from the total number of units.
...
Figure 3: The few-shot prompt Cfor GSM8K high-quality plan creating. The black color text is the requirement R.
The teal and blue color text is the one of example pairs, which contains input x1 including example question and
program and the output y1 including example plan.
**3.2** **Prompting**
We obtained a large number of high-quality plan
datasets and preference datasets from the large
model gpt-3.5-turbo-1106 using prompts, based
on the following observations:
- To mimic the human brain’s problem-solving
approach, we require a substantial amount of
high-quality datasets to supervised fine-tune
and direct preference optimize the Frontal
Lobe Model and the Parietal Lobe Model. Using the model gpt-3.5-turbo-1106 can generate datasets of fairly high quality while ensuring minimal costs;
- Structuring the inputs and outputs of the Models facilitates easier parsing the plan datasets
and the preference dataset that we create.
We design different few-shot prompts for various steps as outlined in Figure 2. The prompts
frequently used in Brain are shown in Figure 3 and
4, and all other prompts used in the experiments
will be presented in the Appendix C.
We provide prompt C, along with the corresponding task input x, and generate output y from
GPT:
_|y|_
_gpt(yt_ _C, x, y<t),_ (1)
_G_ _|_
_t=1_
Y
_G(y|C, x) =_
where C integrates three completely different
human-annotated example pairs Pairi and the customized requirement R for the tasks:
Pairi = (xi, yi), (2)
_C = R_ Pair1 Pair2 Pair3, (3)
_⊕_ _⊕_ _⊕_
Based on two different tasks: generating highquality plan datasets and high-quality preference
datasets, we designed two different few-shot
_′_
prompts C shown in Figure 3 and C shown in
Figure 4.
In Prompt C, the input xi includes the question
and solution, and the output yi represents the plan.
_′_
Meanwhile, in prompt C, the input xi includes the
question and plan, and the output yi represents the
score and reasons.
-----
_′_
**The few-shot prompt C** **for GSM8K high-quality score dataset creating**
**R: Review the following Question, Plan and the corresponding response Solution using the ad-**
ditive 3-pointscoring system described below. Points are accumulated based on the satisfaction
of eachcriterion:
1. Points are awarded according to how well the logic of the Plan is aligned with the Question.
2. Points are awarded according to how the Plan is non-redundant and non-duplicate.
3. Points are awarded according to how the necessary steps and omit key details or steps.
Based on the given Question and Plan, briefly justify your total score. Conclude with the score
using the format: Score: [<total points>].Remember to assess from the AI Assistant perspective,
and evaluate the response in alignment with this additive scoring model, we’ll systematically
attribute points based on the outlined criteria.
**Question:**
Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in
repairs. This increased the value of the house by 150%. How much profit did he make?
**Plan:**
1. Determine the value of the house before the repairs. We can subtract the cost of the repairs
from the original price.
2. Determine the value of the house after the repairs. We can multiply the value of the house
before the repairs by 150%.
3. Determine the profit. We can subtract the cost of the repairs from the value of the house after
the repairs.
**Solution:**
1. Alignment of Plan with the Question:
The plan starts by determining the value of the house before the repairs, which is unnecessary
since the question already states that Josh bought the house for $80,000. This step does not
align well with the question.
The second step in the plan suggests multiplying the value of the house before the repairs by
150%. This is a misinterpretation of the question. The question states that the value of the
house increased by 150% due to repairs, not that its final value is 150% of the original price.
The third step, determining the profit by subtracting the cost of repairs from the value of
the house after repairs, aligns with the question’s requirement but is based on an incorrectly
calculated value of the house after repairs.
Score for this criterion: 0
2. Non-redundancy and Non-duplication:
The first step in the plan is redundant as the initial value of the house is already given in the
question.
The other steps are not redundant but are based on an incorrect understanding of the question.
Score for this criterion: 0.5
3. Completeness and Detail:
The plan fails to correctly outline the necessary steps for solving the problem. The misinterpretation of how the 150% increase in value applies significantly impacss and accuracy of the plan.
The plan does cover tts the completenehe concept of calculating profit, which is a required step,
but it is based on incorrect prior steps.
Score for this criterion: 0
Based on these assessments, the total score is calculated as the sum of the points from each
criterion. Score: [0 + 0.5 + 0] = [0.5]
...
_′_
Figure 4: The few-shot prompt C for GSM8K high-quality score dataset creating. The black color text is the
requirement R. The teal and blue color text is the one of example pairs, which contains input x1 including example
question and example plan z1 and the output y1 including example reasons and score.
-----
**3.3** **Frontal Lobe Model**
In this subsection, we introduce a method for training the Frontal Lobe Model, enabling it to better
generate high-quality Plans from problems. This
ensures an improvement in accuracy when the parietal lobe model performs reasoning.
To address the issue of LLMs generating Plans
of low quality when solving complex reasoning
tasks, we adopt a straightforward method to optimize fine-tuned model, known as Direct Preference
Optimization (DPO) (Rafailov et al., 2023), which
allows for the extraction of its optimal policy in
closed form without the need for a reinforcement
learning training loop.
**Datasets. We utilized four models from the**
ToRA series: ToRA-Code 7B, ToRA-Code 13B,
ToRA-Code 34B, and ToRA 70B, to perform 100
inference samplings on the GSM8K train set, resulting in 3,000K reasoning paths. Then, we deduplicated and filtered all the obtained reasoning paths,
ultimately obtaining 90K distinct and correct reasoning paths dataset D.
To enhance the model’s generalization ability
and performance within the domain, it is crucial
to ensure that the source dataset for SFT and DPO
are different. From the dataset D comprising 7.5K
questions, we extracted all reasoning paths of 5K
questions as the source dataset Dsft for creating
the SFT dataset and all reasoning paths of 2.5K
questions as the source dataset Ddpo for creating
the preference dataset.
**Supervised Fine-Tuning.** To obtain highquality plans, we utilize prompt C with the GPT
model gpt-3.5-turbo-1106 to perform one round
of inference on the source data Dsft, generating
the plan dataset Psft. Subsequently, we conduct
SFT on the dataset Psft using Code LLaMA 7B,
resulting in the training of the Frontal Lobe Model
0.
_FL_
Based on 0, we conducted active learning
_FL_
for k times, stopping when the performance of the
model FLi on the GSM8K test set no longer improves after iterations, to serve as our initial version of the model FL. We use the plans generated
by FLi for inference in the Parietal Lobe Model,
which will be introduced in §3.4, to evaluate the
performance of the Frontal Lobe Model.
**Direct Preference Optimization. We verify**
and optimize our current model’s ability to generate plans without using reinforcement learning, by
employing Directed Preference Optimization. We
use whether the logic of the plan aligns with the
question, whether steps in the plan are repeated,
and whether the plan omits key steps as the scoring
_′_
criteria R . The input xi includes the question and
the plan zi generated by the Frontal Lobe Model
_FL, and the manually annotated scores and rea-_
sons form a new Pair, which serves as our prompt
_′_
_C_, as shown in Figure 4.
_′_ _′_ _′_ _′_ _′_
_C_ = R Pair1 2 3[,] (4)
_⊕_ _[⊕]_ [Pair] _[⊕]_ [Pair]
After generating plans z through inference, we
use the GPT to generate scores s, which means how
well the plans align with the question:
_′_ _′_
_G(y|C_ _, x) = GFL(z|C, x)_ _·Ggpt(y|C_ _, x, z), (5)_
_score = E(y)._ (6)
_′_
We use prompt C on the model gpt-3.5-turbo_1106 to score the preferences of the generated plan_
datasets and provide reasons, extracting the scores
to obtain the preference dataset.
Then, we perform DPO on the initial version of
the frontal lobe model using the constructed preference dataset, resulting in the optimized Frontal
Lobe Model FL[∗].
**3.4** **Parietal Lobe Model**
In this subsection, we introduce a method for training the parietal lobe model, enabling it to better
generate high-quality code-form reasoning paths
from Plans, with the aim of achieving higher accuracy.
**Datasets. We use the model gpt-3.5-turbo-1106**
to perform inference on Dataset D, with the generated plans and questions as input and the code-form
reasoning paths and answers as output, to serve as
our SFT dataset Q0 for training the Parietal Lobe
Model. Otherwise, We performed 100 inference
samplings on the GSM8K train set using model
_PL and processed the obtained reasoning paths to_
remove duplicates and filter out incorrect reasoning paths, resulting in dataset Qi, where i means
iteration times.
**Supervised Fine-Tuning. We conducted super-**
vised fine-tuning on the obtained Dataset Q0 using
Code LLaMA 7B, resulting in the first version of
the Parietal Lobe Model, 0. Based on 0,
_PL_ _PL_
we conducted active learning for k times, stopping when the performance of model PLi on the
GSM8K test set no longer improves after iterations,
to serve as our initial version of the model PL.
-----
**4** **Experiments**
**4.1** **Experiment Settings**
**Models.** We use four ToRA language models:
ToRA-CODE 7B/13B/34B and ToRA 70B with
default parameters except a temperature of 0.9 in
100 sample times to create source reasoning paths
dataset D. We use two OpenAI language models:
_gpt-3.5-turbo-1106 and gpt-4-1106-preview with_
default parameters in sampling for creating plan
datasets and preference dataset. And We use Code
LLaMA 7B to train our model Brain.
**Datasets. We conducted SFT with 55K data for**
_FL, 90K data for PL, and 2.5K data for DPO. And_
we evaluated the models on GSM8K (Cobbe et al.,
2021).
**4.2** **Main Results & Analysis**
Table 1 shows the overall experiment results. We
mainly compare our approach with LLMs of size
7B, using zero-shot inference. Experimental results show that, aside from models trained on exceptionally strong baseline models like InternLM2Base and DeepSeekMath-Base, the performance of
the model trained with our proposed novel method
achieves SOTA performance in comparison with
Code LLaMA 7B based models, with 74% Accuracy.
**Language Models can extract plan explic-**
**itly. To explore the best training effects on Code**
LLaMA 7B using plan datasets generated from the
same source data, we adopted two methods to generate plans. For using Question and Code to generate Plan, we extracted the most frequently repeated
paths from the previously generated dataset P during the process of removing duplicate reasoning
_′_
paths to form dataset D . Using prompt C, we con_′_
ducted one sampling on both D and D on the large
model gpt-3.5-turbo-1106 to obtain plan datasets P
_′_
and P, respectively, and then performed inference
on model PL. For using Question and Solution to′′
generate Plan, using prompt C, we conducted one
sampling on the GSM8K train set with the model
_′′_
_gpt-3.5-turbo-1106 to obtain the plan dataset P_,
and similarly performed inference on model PL.
According to the experimental results in Table
2, it is revealed that the outputs of LLMs for mathematical reasoning tasks, whether in natural language, code, or formal language, all contain plans.
We were able to explicitly extract these plans using
different but similar prompts. Under the same data
scale, the performance of LLMs fluctuates around
**Model** **Accuracy**
_Closed-Source Models_
Minerva 58.8%
PaLM-2 80.7%
GPT-3.5-turbo 80.8%
GPT-4 92.0%
_Open-Source Models without Code_
LLaMA2 16.0 %
Llemma 36.4%
InternLM2-Base 36.5%
InternLM2-Math-Base 49.2%
MAmmoTH 53.6%
MetaMath 66.5%
DeepSeekMath-Base 64.2%
_Open-Source Models with Code_
Code LLaMA 20.8%
MAmmoTH-Coder 59.4%
MathCoder-CL 67.8%
ToRA-Code 72.6%
InternLM2-Math 78.1%
DeepSeekMath-RL 86.7%
**Brain** **74%**
Table 1: Main Results on GSM8K. Comparison for
Code LLaMA 7B based models with Zero-Shot.
70%, and filtering out incorrect data has almost no
impact.
At the same time, it can be understood that using correct answers to prompt GPT to generate
plans can maximize the automatic generation of
high-quality plan datasets under the same data
volume. However, expanding the dataset’s size
through 100 samplings enables the highest quality
of plan datasets generated from the same source
data.
**Dataset** **Wrong case** **Accuracy**
_P_ ✓ 73.7
_P_ 73.8
_′_
_P_ ✓ 69.7
_′_
_P_ 70.1
_′′_
_P_ ✓ 71.0
_′′_
_P_ 71.7
Table 2: Exploring the best training effects on Code
LLaMA 7B using plan datasets generated from the same
source data and compare for whether use wrong case
**Better plan, better performance.** We use
_′′_
prompt C with the model gpt-3.5-turbo-1106 as
our Frontal Lobe Model to generate plans on the
-----
**Method** **Accuracy**
_FLgpt3.5 + PL_ 69.7
_FLgpt4 + PL_ 71.5
Table 4: Investigating whether using GPT-4 to generate
plan datasets will affect model performance of complex
math reasoning tasks.
attempted to use Brain’s framework on GPT, resulting in a significant improvement. Our extensive
ablation experiments indicate that the outputs of
LLMs for mathematical reasoning tasks, whether
in natural language, code, or formal language, all
contain plans. By introducing a two-stage framework that breaks down complex reasoning tasks
into two steps, we train two models to simulate two
regions of the human brain, thereby using plans
to solve complex reasoning tasks. We used Direct
Preference Optimization, a simple method that optimizes strategy directly using preferences, allowing
for the extraction of its optimal policy in a closed
form without the need for a reinforcement learning
training loop. This approach generates high-quality
plans, aiding LLMs in producing more accurate
reasoning paths and answers.
Furthermore, we discovered that the degree of
alignment between the plan and the question positively correlates with the alignment between the
code and the plan. LLMs implicitly score the alignment of the plan with the question. If the score
is low, the model will not generate code based on
the plan but will instead regenerate the code. In
the future, we will explore how LLMs follow plans
to generate reasoning paths, to explain the error
correction abilities of LLMs.
GSM8K test set, and then use our trained parietal
lobe model PL to generate code and execute it to
obtain answers.
According to the experimental results in Table 3,
it is revealed that the two-stage framework of Brain,
which simulates the human approach to problemsolving, is effective, achieving improvements of
2.9% (69.0% → 71.9%) on Code LLaMA 7B and
1.0% (80.8% → 81.8%) on gpt-3.5-turbo-1106.
Moreover, the higher the quality of plans generated in the first stage, the higher the accuracy in
solving complex reasoning tasks, using the model
_gpt-3.5-turbo-1106 as the Frontal Lobe Model re-_
sulted in a 7.5% improvement compared to FL[∗].
Based on the case study in Appendix B, we find
that the accuracy of gpt-3.5-turbo-1106 and PL
in tasks of aligning codes with plans is as high as
90%.
**Method** **Accuracy**
_one-stage_
SFT 69.0%
GPT-3.5-turbo 80.8%
_Brain(two-stage)_
_FL + PL_ 71.9%
_FL[∗]_ + PL 72.9%
_FLall[∗]_ [+][ PL] 74.0%
_gpt-3.5-turbo-1106 + PL_ 80.4%
_gpt-3.5-turbo-1106 + gpt-3.5-turbo-1106_ 81.8%
Table 3: Investigation of how two-stage method and how
the quality plan affect model performance of complex
math reasoning tasks.
We conducted one inference sampling on the
GSM8K train set using the GPT models gpt-3.5_turbo-1106 and gpt-4-1106-preview, respectively._
The datasets obtained were then fine-tuned on Code
LLaMA 7B, training the models the first version of
the model FLgpt3.5 and FLgpt4. Inference was performed on the GSM8K test set with these models,
generating datasets Pgpt3.5 and Pgpt4 . Inference
was carried out on PL.
According to Table 4, although the performance
of gpt-4-1106-preview was slightly higher than that
of gpt-3.5-turbo-1106 by 1.6%, we chose to use
_gpt-3.5-turbo-1106 for the sake of consistency in_
the models used in our experiments and cost considerations, given the extensive reliance on GPT
for subsequent experiments.
**5** **Conclusion**
We present Brain, which currently shows competitive performance on the GSM8K dataset compared
to other open-source models. At the same time, we
-----
**Limitations**
The main limitation of this paper is that it does
not analyze additional methods to enhance performance on Brain, such as self-consistency and selfverification. Furthermore, the Brain framework
could also be applied to other open-source models.
We should also evaluate complex mathematical reasoning tasks on a broader range of tasks to make
the results more convincing.
**Ethical Statements**
We claim from these aspects of ethical risks:
1) Our work leverages the open-source model
CodeLLaMA and the GSM8K dataset. We have
strictly followed their licensing protocols to ensure
compliance with all usage terms and conditions.
2) We utilized GPT4 for translation and grammatical corrections in our manuscript. All generated content has been thoroughly reviewed and
revised by human authors to ensure it adheres to
ethical guidelines and maintains the integrity of our
research.
3) Our research involves the generation of plan
datasets and preference datasets using GPT. While
we have conducted sample checks to identify any
ethical issues and found none, we recognize the
limitations of this approach. It is not feasible to
guarantee that all outputs generated by GPT are
free from ethical risks.
**References**
Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, Eric Chu, Jonathan H. Clark, Laurent El
Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin
Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao,
Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez
Abrego, Junwhan Ahn, Jacob Austin, Paul Barham,
Jan Botha, James Bradbury, Siddhartha Brahma,
Kevin Brooks, Michele Catasta, Yong Cheng, Colin
Cherry, Christopher A. Choquette-Choo, Aakanksha
Chowdhery, Clément Crepy, Shachi Dave, Mostafa
Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz,
Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu
Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy GurAri, Steven Hand, Hadi Hashemi, Le Hou, Joshua
Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun,
Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li,
Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu,
Frederick Liu, Marcello Maggioni, Aroma Mahendru,
Joshua Maynez, Vedant Misra, Maysam Moussalem,
Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek,
Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif,
Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee
Shelby, Ambrose Slone, Daniel Smilkov, David R.
So, Daniel Sohn, Simon Tokumine, Dasha Valter,
Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang,
Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting
Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven
Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav
[Petrov, and Yonghui Wu. 2023. Palm 2 technical](http://arxiv.org/abs/2305.10403)
[report.](http://arxiv.org/abs/2305.10403)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Shumin Deng, Ningyu Zhang, Nay Oo, and Bryan Hooi.
[2023. Towards a unified view of answer calibration](http://arxiv.org/abs/2311.09101)
[for multi-step reasoning.](http://arxiv.org/abs/2311.09101)
Jiahui Gao, Renjie Pi, Jipeng Zhang, Jiacheng Ye, Wanjun Zhong, Yufei Wang, Lanqing Hong, Jianhua Han,
[Hang Xu, Zhenguo Li, and Lingpeng Kong. 2023. G-](http://arxiv.org/abs/2312.11370)
[llava: Solving geometric problem with multi-modal](http://arxiv.org/abs/2312.11370)
[large language model.](http://arxiv.org/abs/2312.11370)
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
[Chen. 2023. Tora: A tool-integrated reasoning agent](http://arxiv.org/abs/2309.17452)
[for mathematical problem solving.](http://arxiv.org/abs/2309.17452)
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong,
Zhen Wang, Daisy Zhe Wang, and Zhiting Hu. 2023.
[Reasoning with language model is planning with](http://arxiv.org/abs/2305.14992)
[world model.](http://arxiv.org/abs/2305.14992)
Ruixin Hong, Hongming Zhang, Xinyu Pang, Dong Yu,
[and Changshui Zhang. 2023a. A closer look at the](http://arxiv.org/abs/2311.07954)
[self-verification abilities of large language models in](http://arxiv.org/abs/2311.07954)
[logical reasoning.](http://arxiv.org/abs/2311.07954)
Sirui Hong, Mingchen Zhuge, Jonathan Chen, Xiawu
Zheng, Yuheng Cheng, Ceyao Zhang, Jinlin Wang,
Zili Wang, Steven Ka Shing Yau, Zijuan Lin, Liyang
Zhou, Chenyu Ran, Lingfeng Xiao, Chenglin Wu,
[and Jürgen Schmidhuber. 2023b. Metagpt: Meta pro-](http://arxiv.org/abs/2308.00352)
[gramming for a multi-agent collaborative framework.](http://arxiv.org/abs/2308.00352)
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xiny[ing Song, and Denny Zhou. 2023. Large language](http://arxiv.org/abs/2310.01798)
[models cannot self-correct reasoning yet.](http://arxiv.org/abs/2310.01798)
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu,
Kyle Richardson, Peter Clark, and Ashish Sabharwal.
[2023. Decomposed prompting: A modular approach](http://arxiv.org/abs/2210.02406)
[for solving complex tasks.](http://arxiv.org/abs/2210.02406)
-----
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
[Jian-Guang Lou, and Weizhu Chen. 2023. Making](http://arxiv.org/abs/2206.02336)
[large language models better reasoners with step-](http://arxiv.org/abs/2206.02336)
[aware verifier.](http://arxiv.org/abs/2206.02336)
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri
Edwards, Bowen Baker, Teddy Lee, Jan Leike, John
Schulman, Ilya Sutskever, and Karl Cobbe. 2023.
[Let’s verify step by step.](http://arxiv.org/abs/2305.20050)
Zhan Ling, Yunhao Fang, Xuanlin Li, Zhiao Huang,
Mingu Lee, Roland Memisevic, and Hao Su. 2023.
[Deductive verification of chain-of-thought reasoning.](http://arxiv.org/abs/2306.03872)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](http://arxiv.org/abs/2308.09583)
[ardmath.](http://arxiv.org/abs/2308.09583)
[OpenAI. 2023. Gpt-4 technical report.](https://doi.org/10.48550/arXiv.2303.08774)
Debjit Paul, Mete Ismayilzada, Maxime Peyrard, Beatriz Borges, Antoine Bosselut, Robert West, and Boi
[Faltings. 2023. Refiner: Reasoning feedback on in-](http://arxiv.org/abs/2304.01904)
[termediate representations.](http://arxiv.org/abs/2304.01904)
Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano
Ermon, Christopher D. Manning, and Chelsea Finn.
[2023. Direct preference optimization: Your language](http://arxiv.org/abs/2305.18290)
[model is secretly a reward model.](http://arxiv.org/abs/2305.18290)
Bernardino Romera-Paredes, Mohammadamin
Barekatain, Alexander Novikov, Matej Balog,
M. Pawan Kumar, Emilien Dupont, Francisco J. R.
Ruiz, Jordan S. Ellenberg, Pengming Wang, Omar
Fawzi, Pushmeet Kohli, and Alhussein Fawzi. 2023.
[Mathematical discoveries from program search with](https://doi.org/10.1038/s41586-023-06924-6)
[large language models. Nature.](https://doi.org/10.1038/s41586-023-06924-6)
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle,
Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi
Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, Artyom
Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish
Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal
Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier,
[Thomas Scialom, and Gabriel Synnaeve. 2023. Code](http://arxiv.org/abs/2308.12950)
[llama: Open foundation models for code.](http://arxiv.org/abs/2308.12950)
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu,
Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu,
[and Daya Guo. 2024. Deepseekmath: Pushing the](http://arxiv.org/abs/2402.03300)
[limits of mathematical reasoning in open language](http://arxiv.org/abs/2402.03300)
[models.](http://arxiv.org/abs/2402.03300)
Haotian Sun, Yuchen Zhuang, Lingkai Kong, Bo Dai,
[and Chao Zhang. 2023. Adaplanner: Adaptive plan-](http://arxiv.org/abs/2305.16653)
[ning from feedback with language models.](http://arxiv.org/abs/2305.16653)
[Mirac Suzgun and Adam Tauman Kalai. 2024. Meta-](http://arxiv.org/abs/2401.12954)
[prompting: Enhancing language models with task-](http://arxiv.org/abs/2401.12954)
[agnostic scaffolding.](http://arxiv.org/abs/2401.12954)
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier
Martinet, Marie-Anne Lachaux, Timothée Lacroix,
Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal
Azhar, Aurelien Rodriguez, Armand Joulin, Edouard
[Grave, and Guillaume Lample. 2023. Llama: Open](http://arxiv.org/abs/2302.13971)
[and efficient foundation language models.](http://arxiv.org/abs/2302.13971)
Trieu H. Trinh, Yuhuai Wu, Quoc V. Le, He He,
and Thang Luong. 2024. [Solving olympiad ge-](https://doi.org/10.1038/s41586-023-06747-5)
[ometry without human demonstrations.](https://doi.org/10.1038/s41586-023-06747-5) _Nature,_
625(7995):476–482.
Ke Wang, Houxing Ren, Aojun Zhou, Zimu Lu, Sichun
Luo, Weikang Shi, Renrui Zhang, Linqi Song,
[Mingjie Zhan, and Hongsheng Li. 2023a. Mathcoder:](http://arxiv.org/abs/2310.03731)
[Seamless code integration in llms for enhanced math-](http://arxiv.org/abs/2310.03731)
[ematical reasoning.](http://arxiv.org/abs/2310.03731)
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu,
Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
[2023b. Plan-and-solve prompting: Improving zero-](http://arxiv.org/abs/2305.04091)
[shot chain-of-thought reasoning by large language](http://arxiv.org/abs/2305.04091)
[models.](http://arxiv.org/abs/2305.04091)
Peiyi Wang, Lei Li, Zhihong Shao, R. X. Xu, Damai
Dai, Yifei Li, Deli Chen, Y. Wu, and Zhifang Sui.
[2023c. Math-shepherd: A label-free step-by-step](http://arxiv.org/abs/2312.08935)
[verifier for llms in mathematical reasoning.](http://arxiv.org/abs/2312.08935)
Zeqiu Wu, Yushi Hu, Weijia Shi, Nouha Dziri, Alane
Suhr, Prithviraj Ammanabrolu, Noah A. Smith, Mari
[Ostendorf, and Hannaneh Hajishirzi. 2023a. Fine-](http://arxiv.org/abs/2306.01693)
[grained human feedback gives better rewards for lan-](http://arxiv.org/abs/2306.01693)
[guage model training.](http://arxiv.org/abs/2306.01693)
[Zhenyu Wu, Meng Jiang, and Chao Shen. 2023b. Get](http://arxiv.org/abs/2312.06867)
[an a in math: Progressive rectification prompting.](http://arxiv.org/abs/2312.06867)
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran,
Thomas L. Griffiths, Yuan Cao, and Karthik
Narasimhan. 2023. [Tree of thoughts: Deliberate](http://arxiv.org/abs/2305.10601)
[problem solving with large language models.](http://arxiv.org/abs/2305.10601)
Fei Yu, Anningzhe Gao, and Benyou Wang. 2023a.
[Outcome-supervised verifiers for planning in mathe-](http://arxiv.org/abs/2311.09724)
[matical reasoning.](http://arxiv.org/abs/2311.09724)
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2023b. Meta-](http://arxiv.org/abs/2309.12284)
[math: Bootstrap your own mathematical questions](http://arxiv.org/abs/2309.12284)
[for large language models.](http://arxiv.org/abs/2309.12284)
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting
Dong, Keming Lu, Chuanqi Tan, Chang Zhou, and
[Jingren Zhou. 2023. Scaling relationship on learning](http://arxiv.org/abs/2308.01825)
[mathematical reasoning with large language models.](http://arxiv.org/abs/2308.01825)
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao
Huang, Huan Sun, Yu Su, and Wenhu Chen. 2023.
[Mammoth: Building math generalist models through](http://arxiv.org/abs/2309.05653)
[hybrid instruction tuning.](http://arxiv.org/abs/2309.05653)
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew
[Chi-Chih Yao. 2023. Cumulative reasoning with](http://arxiv.org/abs/2308.04371)
[large language models.](http://arxiv.org/abs/2308.04371)
-----
Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei
[Qin, and Lidong Bing. 2023. Verify-and-edit: A](https://doi.org/10.18653/v1/2023.acl-long.320)
[knowledge-enhanced chain-of-thought framework.](https://doi.org/10.18653/v1/2023.acl-long.320)
In Proceedings of the 61st Annual Meeting of the
_Association for Computational Linguistics (Volume_
_1: Long Papers), pages 5823–5840, Toronto, Canada._
Association for Computational Linguistics.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo
[Li, and Yu Li. 2023. Progressive-hint prompting](http://arxiv.org/abs/2304.09797)
[improves reasoning in large language models.](http://arxiv.org/abs/2304.09797)
Xinyu Zhu, Junjie Wang, Lin Zhang, Yuxiang Zhang,
Yongfeng Huang, Ruyi Gan, Jiaxing Zhang, and Yu[jiu Yang. 2023a. Solving math word problems via](https://doi.org/10.18653/v1/2023.acl-long.245)
[cooperative reasoning induced language models. In](https://doi.org/10.18653/v1/2023.acl-long.245)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), pages 4471–4485, Toronto, Canada._
Association for Computational Linguistics.
Zhaocheng Zhu, Yuan Xue, Xinyun Chen, Denny Zhou,
Jian Tang, Dale Schuurmans, and Hanjun Dai. 2023b.
[Large language models can learn rules.](http://arxiv.org/abs/2310.07064)
-----
**A** **Experiment Details**
The entire training process of Brain on NVIDIA A100 40G GPUs took a total of 8 hours and we evaluating
GSM8K test set took an average of 3 minutes each time. Both SFT and active learning used a learning rate
of 2e-5 with a 3% warm-up period for 1 epoch and a global batch size of 128. For DPO, aside from the
learning rate being 2e-6, all other parameters were set to default. We trained all models with DeepSpeed
ZeRO Stage3 and Flash-Attention 2. Additionally, GSM8K has 7473 train set and 1319 test set. The
experimental results are based on just a single run.
**B** **Case Study**
|id|0 1 2 3 4 5 6 7 8 9|
|---|---|
|Plan Align Question(FL) Code Align Plan(PL) Code Align Plan(gpt-3.5-turbo-1106) score|1 1 0 1 1 1 1 0.5 0.5 1 1 1 0 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 T T F T T T T F F T|
**id** 0 1 2 3 4 5 6 7 8 9
Plan Align Question(FL) 1 1 0 1 1 1 1 0.5 0.5 1
Code Align Plan(PL) 1 1 0 1 1 1 1 1 1 1
Code Align Plan(gpt-3.5-turbo-1106) 1 1 0 1 1 1 1 1 1 1
score T T F T T T T F F T
Table 5: For the Plan Align Question, we score the 10 sampled examples based on the scoring rules for plans in the
prompt, with the score being the average score for each step. For Code Align Plan, the scoring is based on the extent
to which the generated code completely conforms to the steps, with the score being the average score for each step.
**C** **Prompt**
_′′_
**The few-shot prompt C** **for GSM8K high-quality plan creating.**
**R: Let’s write a plan based on the solution for solving math problems.**
**Question:**
Richard lives in an apartment building with 15 floors. Each floor contains 8 units, and 3/4 of
the building is occupied. What’s the total number of unoccupied units In the building?
**Solution:**
1. The total number of units in the building will be 8 units/floor * 15 floors = «8*15=120»120
units.
2. If 3/4 of the building is occupied, then the total number of occupied units is 3/4 * 120 units =
«3/4*120=90»90 units.
3. The total number of unoccupied units is 120 units - 90 units = «120-90=30»30 units.
**Plan:**
To solve the problem follow these steps:
1. Determine the total number of units in the building. We can multiply the number of units per
floor by the total number of floors.
2. Determine the number of occupied units. We can multiply the total number of units by the
fraction representing the occupied units.
3. Determine the number of unoccupied units. We can subtract the number of occupied units
from the total number of units.
...
_′′_
Figure 5: The few-shot prompt C for GSM8K high-quality plan creating. The black color text is the requirement
_R. The teal and blue color text is the one of example pairs, which contains input x1 including example question and_
solution and the output y1 including example plan.
-----
| [
"Yezeng, Chen",
"Zui, Chen",
"Yi, Zhou"
] | 2024-02-23T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2403.00800 | https://arxiv.org/abs/2403.00800 | https://www.semanticscholar.org/paper/09a516b19897e20860dcde8fea20da3bd867d356 |
CMM-Math: A Chinese Multimodal Math Dataset To Evaluate and Enhance the Mathematics Reasoning of Large Multimodal Models | Large language models (LLMs) have obtained promising results in mathematical reasoning, which is a foundational skill for human intelligence. Most previous studies focus on improving and measuring the performance of LLMs based on textual math reasoning datasets (e.g., MATH, GSM8K). Recently, a few researchers have released English multimodal math datasets (e.g., MATHVISTA and MATH-V) to evaluate the effectiveness of large multimodal models (LMMs). In this paper, we release a Chinese multimodal math (CMM-Math) dataset, including benchmark and training parts, to evaluate and enhance the mathematical reasoning of LMMs. CMM-Math contains over 28,000 high-quality samples, featuring a variety of problem types (e.g., multiple-choice, fill-in-the-blank, and so on) with detailed solutions across 12 grade levels from elementary to high school in China. Specifically, the visual context may be present in the questions or opinions, which makes this dataset more challenging. Through comprehensive analysis, we discover that state-of-the-art LMMs on the CMM-Math dataset face challenges, emphasizing the necessity for further improvements in LMM development. We also propose a Multimodal Mathematical LMM (Math-LMM) to handle the problems with mixed input of multiple images and text segments. We train our model using three stages, including foundational pre-training, foundational fine-tuning, and mathematical fine-tuning. The extensive experiments indicate that our model effectively improves math reasoning performance by comparing it with the SOTA LMMs over three multimodal mathematical datasets. | This paper releases a Chinese multimodal math dataset, including benchmark and training parts, to evaluate and enhance the mathematical reasoning of LMMs, and proposes a Multimodal Mathematical LMM (Math-LMM) to handle the problems with mixed input of multiple images and text segments. | [
"Wentao, Liu",
"Qianjun, Pan",
"Jie, Zhou",
"Zhuo, Liu",
"Ji, Wu",
"Qin, Chen",
"Bo, Jiang",
"Aimin, Zhou",
"Liang, He",
"Yi, Zhang"
] | 2024-09-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2409.02834v1 | https://arxiv.org/abs/2409.02834 | https://www.semanticscholar.org/paper/344a9642b76c83143289db8f9fc535e74afd0421 |
|
CNN models' sensitivity to numerosity concepts | The nature of number is a classic question in the philosophy of mathematics. Cognitive scientists have shown that numbers are mentally represented as magnitudes organized as a mental number line (MNL). Here we ask whether CNN models, in learning to classify images, also learn about number and numerosity ‘for free’. This was the case. A representative model showed the distance, size, and ratio effects that are the signatures of magnitude representations in humans. An MDS analysis of their latent representations found a close resemblance to the MNL documented in people. These findings challenge the developmental science proposal that numbers are part of the ‘core knowledge’ that all human infants possess, and instead serve as an existence proof of the learnability of numerical concepts. | null | # CNN models’ sensitivity to numerosity concepts
**Neha Upadhyay**
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
```
[email protected]
```
**Sashank Varma**
College of Computing
Georgia Institute of Technology
Atlanta, GA 30332
```
[email protected]
```
**Abstract**
The nature of number is a classic question in the philosophy of mathematics. Cognitive scientists have shown that numbers are mentally represented as magnitudes
organized as a mental number line (MNL). Here we ask whether CNN models, in
learning to classify images, also learn about number and numerosity ‘for free’. This
was the case. A representative model showed the distance, size, and ratio effects
that are the signatures of magnitude representations in humans. An MDS analysis
of their latent representations found a close resemblance to the MNL documented in
people. These findings challenge the developmental science proposal that numbers
are part of the ‘core knowledge’ that all human infants possess, and instead serve
as an existence proof of the learnability of numerical concepts.
**1** **Introduction**
The past 10 years have seen great progress in computer vision models. Recent research is exploring
the alignment of these models to behavioral and brain imaging data on human cognition Cichy and
Kaiser [2019]. Here, we investigate the sensitivity of CNNs to number.
The nature of number is a classic question in the philosophy of mathematics dating back to Plato’s
dialogue Meno. The classic cognitive science finding is that numbers are represented in the mind as
magnitudes akin to the sensory representations of physical quantities Moyer and Landauer [1967].
These magnitude representations are in turn organized as a mental number line (MNL; Figure 1).
Figure 1: Mental number line representation in (left) human minds and (right) VGG19.
What is the origin of the MNL? A prominent position in developmental science is that magnitude
representations of number are part of the ‘core knowledge’ that all human infants possess Spelke
and Kinzler [2007]. An alternative hypothesis is that these representations are learned ‘for free’ as
intelligent systems learn to perceive and navigate their environment. Here, we evaluate this latter
hypothesis using CNNs. We propose that these models learn number representations as a ‘side effect’
of learning to classify images. We evaluate this proposal in experiments that constitute an existence
proof of the learnability of numerical concepts without the need to posit core knowledge of number.
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
**1.1** **Cognitive science evidence that numbers have magnitude representations**
That numbers have magnitude representations is supported by multiple experimental findings. We
focus on three important findings that have been documented using the number comparison paradigm.
In this paradigm, people see a pair of digits (e.g., 1 vs. 3) or numerosities (i.e., sets of objects like ‘o’
vs. ‘o o o’). The distance effect is the finding that the time to compare two numbers x and y decreases
as the distance |x − _y| between them increases Moyer and Landauer [1967]. This is consistent with_
the following process model: fixate x and y on the MNL and discriminate which one is ‘to the right’.
The farther apart the two numbers, the easier the discrimination.
The size effect is the finding that the time to compare two numbers x and y increases as their average
size (x − _y)/2 increases Parkman [1971]. This suggests that the scaling (i.e., the distance between_
adjacent numbers) of the MNL is not fixed, as in the conventional number line of mathematics, but
rather is psychophysically compressed, as it is for perceptual quantities; see Figure 1. Thus, for
example, people are faster to discriminate which of 1 vs. 3 is ’to the right’ than 7 vs. 9.
The ratio effect combines the distance and size effects: it is the finding that the time to compare two
numbers x and y decreases as the ratio of the greater number over the smaller number increases,
and this decrease is according to a nonlinear psychophysical function Halberda et al. [2008]. The
presence of this effect is considered very strong evidence for magnitude representations of number.
**1.2** **Numerical sensitivities of computer vision models**
Recent research has explored the mathematical capabilities of ML models. Much of this work has
focused on the ability of NLP mdels to solve arithmetic, algebra, trigonometric, and calculus problems
Welleck et al. [2022] and also to generate proofs in higher-level mathematics (i.e., discrete math,
probability, linear algebra, abstract algebra, real analysis, topology) Davies et al. [2021].
Less attention has been paid to the mathematical capabilities of computer vision models. This is
perhaps because they are a poor fit for symbolic mathematics. Early research explored pre-CNN
models trained on artificially generated numerosity stimuli Stoianov and Zorzi [2012] Zorzi and
Testolin [2018] found that such models showed the ratio effect; further analysis of the hidden layers
found units tuned to specific numerosities. Subsequent work generalized these findings to networks
trained on numerosity images abstracted from naturally occurring images Testolin et al. [2020].
Other researchers have evaluated the number representations of CNN models trained on ImageNet
Kim et al. [2021] Nasr et al. [2019]. They have found hidden layer units tuned to specific numerosities.
When these representations are used as inputs to a separate network trained to decide whether two
images are of the same numeoristy or not, that network shows the distance and size effects.
With respect to the MNL, it has recently been shown that a vision transformer model trained on
artificially generated numerosity stimuli learns a latent MNL representation Boccato et al. [2021].
**1.3** **Research Questions**
We investigate whether CNNs learn magnitude representations of number. We present numerosities
to the models in a sequence of experiments differing in which incidental visual features are controlled
and which are allowed to vary freely (and potentially correlate with number):
1. The items of the two numerosities are circles. The total area of the numerosities is equated
to rule out this visual feature as the basis of comparison.
2. Like (1) but the total circumference is equated to rule out this visual feature.
3. Like (2) but the items of the two numerosities are different (e.g., two of the three of circles,
squares, triangles) to generalize the findings across shapes.
4. Like (3) but the total area of each numerosity is different to generalize the findings across
both shapes and area.
5. Like (4) but the items of each numerosity are random shapes of random area to further
generalize the findings.
6. Naturally occurring numerosities found through Google Images that differ on many visual
attributes (e.g., shape, size, drawing style, color, etc.) to ensure further generalization.
We evaluate whether the models show the signatures of magnitude representations: the distance, size,
and ratio effects. We also explore whether the models learn a latent representation of the MNL.
-----
**2** **Methods**
**2.1** **Models**
We used the pre-trained VGG19 Simonyan and Zisserman [2015] model from the PyTorch model zoo
as our primary model. We evaluated the generalizability of our findings by replicating all analyses
with the pre-trained Alexnet, Googlenet, Densenet, and Resnet18 models.
**2.2** **Numerosity stimuli**
The stimuli for the six experiments are as described in the six research questions above; see Figure 2
for examples. They spanned numerosities from 1 to 9.
Figure 2: For the 6 experiments, sample stimuli for 3 numerosities.
For the Experiments 1-5, the stimuli were solid black shapes randomly placed on a white background
of 720 × 720 pixels generated using matplotlib. For Experiment 1, the items were circles and the
total area of each comparison was equated. Specifically, for each of 5 total areas, we generated 4
stimuli for each of the numerosities 1-9. The total area of a stimulus varied from A1 = .02% to
_A5 = .10% of all pixels in increments of .02%. For Experiment 2, the total circumference of a_
comparison was equated. The stimuli were generated similarly to Experiment 1 except that the total
circumference of a stimulus was varied from C1 = 100 to C5 = 300 pixels in increments of 50
pixels. Experiment 3 was like Experiment 1 except the items of the two stimuli were different, e.g.,
one might be circles and the other either squares or triangles. Experiment 4 was like Experiment 3
except the total areas of the two stimuli varied randomly. Experiment 5 was like Experiment 4 except
that the items of each stimulus varied randomly both in shape and in area. For Experiment 6, we
automatically collected 80-100 images from Google Images for each numerosity 1-9. We stripped
the background and manually verified each image. We retained a subset of 40 images per numerosity
that clearly showed the target quantity.
**2.3** **Generating Model predictions**
For each experiment, we normalized and resized each stimulus image. For two unequal numerosities
each in the range 1 − 9, there are (9 × 8)/2 = 36 possible comparisons. Denote the numerosities
of a comparison as n1 and n2. We randomly selected stimuli of numerosity n1 and n2, presented
each to the CNN, captured their respective vector representations on the final fully connected layer,
and computed the cosine similarity of the two vectors. We repeated this process M = 20 times for
the first three experiments, M = 40 times for Experiment 4, and M = 60 times for Experiments 5
and 6, increasing M with increasing noise in the stimulus. Finally, we computed the average cosine
similarity when comparing n1 and n2.
We used these values to estimate the three effects. For the distance effect, we plotted the average
cosine similarity (y) at each distance |n1 − _n2| (x) and computed the correlation between these two_
-----
variables. A distance effect is signalled by a negative correlation close to r = −1. We repeated this
process for the size effect, with the x variable the size |n1 − _n2|/2 of the comparison; here, the_
expectation was for a positive correlation close to r = 1. We also repeated this process for the ratio
effect, with the x variable the ratio max(n1, n2)/min(n1, n2) of the comparison. Here, we fit a
negative exponential function to model the results; the expectation was for an R[2] value close to 1.
**3** **Results**
Figure 3: For the 6 experiments, the distance, size, and ratio effects.
Figure 4 presents graphs for all VGG19 experiments showing the distance, size, and ratio effects. The
panels correspond to the sample stimuli displayed in Figure 2.
As the graphs show, VGG19 generally displayed the distance, size, and ratio effects across all
experiments. Two patterns are worth noting. First, the model’s account of the size effect is weaker
than its account of the other two effects. This is particularly true of Experiments 2 and 4. That said, it
is the distance effect that is the key finding for a MNL, and the ratio effect subsumes both the distance
and size effects. Second, the model’s distance and ratio effects are weakest for Experiment 6. These
stimuli were images returned from a Google Image search, and they naturally varied on many more
incidental visual features than the stimuli of the other experiments. The reduced fits may signal the
limits of the model’s representation of numerosity.
Finally, we estimated the latent MNL representation of VGG19 by organizing the pairwise average
cosine similarities of the numerosities 1 − 9 in Experiment 1 in a matrix, submitting this to MDS,
and requesting a 1D solution; see Figure 1. The Stress-I value was 0.008, indicating that a continuum
offers a good of these similarities. The representation resembles that of the psychophysically
compressed MNL, with the major distortion being the displacement of ’2’.
**4** **Discussion**
The current study investigated whether CNN models, in learning to classify images, learn about
number and numerosity ‘for free’. This was the case: VGG19 showed the distance, size, and ratio
effects that are the signature of the magnitude representations in humans. This was true across five
experiments using artificially generated stimuli and a sixth using naturally occurring numerosities
found via Google Images. In addition, an MDS analysis of the latent representation of the Experiment
1 stimuli found a close resemblance to the psychophysically scaled MNL documented in people.
That CNNs trained on ImageNet learn magnitude representations of number is at odds with the
_core knowledge proposal, which states that the MNL is part of biological endowment of the child_
Spelke and Kinzler [2007]. This finding is more consistent with the emergentist perspective, which
-----
claims that this representation arises from the interplay between neural architecture constraints,
domain-general learning mechanisms, and structured environments Zorzi and Testolin [2018].
In ongoing work, we are identifying the earliest layer of CNNs where magnitude representations first
manifest as evidenced by distance, size, and ratio effects and by a latent MNL. We are also exploring
the numerical sensitivities of state-of the art vision transformer models Boccato et al. [2021].
The current research sets the stage for investigations of the development of magnitude representations.
The developmental progressions of the distance, size, and ratio effects have been documented in
children Halberda et al. [2008] Sekuler and Mierkiewicz [1977]. An interesting question is whether
computer vision models show the same progressions in their number representations over training.
**References**
Tommaso Boccato, Alberto Testolin, and Marco Zorzi. Learning numerosity representations with
transformers: Number generation tasks and out-of-distribution generalization. Entropy, 23(7):857,
2021.
Radoslaw M Cichy and Daniel Kaiser. Deep neural networks as scientific models. Trends in cognitive
_sciences, 23(4):305–317, 2019._
Alex Davies, Petar Veliˇckovi´c, Lars Buesing, Sam Blackwell, Daniel Zheng, Nenad Tomašev,
Richard Tanburn, Peter Battaglia, Charles Blundell, András Juhász, et al. Advancing mathematics
by guiding human intuition with ai. Nature, 600(7887):70–74, 2021.
Justin Halberda, Michèle MM Mazzocco, and Lisa Feigenson. Individual differences in non-verbal
number acuity correlate with maths achievement. Nature, 455(7213):665–668, 2008.
Gwangsu Kim, Jaeson Jang, Seungdae Baek, Min Song, and Se-Bum Paik. Visual number sense in
untrained deep neural networks. Science advances, 7(1):eabd6127, 2021.
Robert S Moyer and Thomas K Landauer. Time required for judgements of numerical inequality.
_Nature, 215(5109):1519–1520, 1967._
Khaled Nasr, Pooja Viswanathan, and Andreas Nieder. Number detectors spontaneously emerge in
a deep neural network designed for visual object recognition. Science advances, 5(5):eaav7903,
2019.
John M Parkman. Temporal aspects of digit and letter inequality judgments. Journal of experimental
_psychology, 91(2):191, 1971._
Robert Sekuler and Diane Mierkiewicz. Children’s judgments of numerical inequality. Child
_Development, pages 630–633, 1977._
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image
recognition. In International Conference on Learning Representations, 2015.
Elizabeth S Spelke and Katherine D Kinzler. Core knowledge. Developmental science, 10(1):89–96,
2007.
Ivilin Stoianov and Marco Zorzi. Emergence of a’visual number sense’in hierarchical generative
models. Nature neuroscience, 15(2):194–196, 2012.
Alberto Testolin, Will Y Zou, and James L McClelland. Numerosity discrimination in deep neural
networks: Initial competence, developmental refinement and experience statistics. Developmental
_science, 23(5):e12940, 2020._
Sean Welleck, Peter West, Jize Cao, and Yejin Choi. Symbolic brittleness in sequence models: on
systematic generalization in symbolic mathematics. In Proceedings of the AAAI Conference on
_Artificial Intelligence, volume 36, pages 8629–8637, 2022._
Marco Zorzi and Alberto Testolin. An emergentist perspective on the origin of number sense.
_Philosophical Transactions of the Royal Society B: Biological Sciences, 373(1740):20170043,_
2018.
-----
| [
"Neha, Upadhyay",
"Sashank, Varma"
] | 2023-10-28T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=BENuWCJuTU | null | null |
COMET: "Cone of experience" enhanced large multimodal model for mathematical problem generation | The automatic generation of high-quality mathematical problems is practically valuable in many educational scenarios. Large multimodal model provides a novel technical approach for the mathematical problem generation because of its wide success in cross-modal data scenarios. However, the traditional method of separating problem solving from problem generation and the mainstream fine-tuning framework of monotonous data structure with homogeneous training objectives limit the application of large multimodal model in mathematical problem generation. Addressing these challenges, this paper proposes COMET, a "Cone of Experience" enhanced large multimodal model for mathematical problem generation. Firstly, from the perspective of mutual ability promotion and application logic, we unify stem generation and problem solving into mathematical problem generation. Secondly, a three-stage fine-turning framework guided by the "Cone of Experience" is proposed. The framework divides the fine-tuning data into symbolic experience, iconic experience, and direct experience to draw parallels with experiences in the career growth of teachers. Several fine-grained data construction and injection methods are designed in this framework. Finally, we construct a Chinese multimodal mathematical problem dataset to fill the vacancy of Chinese multimodal data in this field. Combined with objective and subjective indicators, experiments on multiple datasets fully verify the effectiveness of the proposed framework and model. | A three-stage fine-turning framework guided by the "Cone of Experience" is proposed, which divides the fine-tuning data into symbolic experience, iconic experience, and direct experience to draw parallels with experiences in the career growth of teachers. | ### COMET : “Cone of experience” enhanced large multimodal model for mathematical problem generation
**Sannyuya Liu[1][,][2], Jintian Feng[1][,][2], Zongkai Yang[1][,][2], Yawei Luo[3],**
**Qian Wan[1][,][2][∗], Xiaoxuan Shen[1][,][2][∗], Jianwen Sun[1][,][2][∗]**
1National Engineering Research Center of Educational Big Data, Central China Normal University, Wuhan, China
2Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan, China
3School of Software Technology, Zhejiang University, Hangzhou, China
{liusy027, zkyang027, wanq8228, shenxiaoxuan, sunjw}@ccnu.edu.cn
[email protected], [email protected]
**Abstract**
**1** **Introduction**
As a vital driving force leading the revolution of
technological and industrial development, generative artificial intelligence (GenAI) is restructuring various industries. For education, the impact
of GenAI is unprecedented (Wang et al., 2023a).
The Large Language Model (LLM), as one of the
most representative technologies of GenAI, displays excellent capabilities in text generation and
processing (Achiam et al., 2023; Ouyang et al.,
2022; Sun et al., 2023). The Large Multimodal
Model (LMM) further expands the data boundaries
of the LLM and has achieved widespread success in
cross-modal tasks (including image captioning and
visual question answering) (Liu et al., 2024a; Li
et al., 2023; Chen et al., 2023). The integration and
application of LMM has become a key approach
to promote the digital transformation of education,
since most of the teaching resources and records in
the educational scenario are multimodal data.
In recent years, many researchers have been
exploring the possibilities of combining LMM
with education, such as assisted writing(Liu et al.,
2024b) and emotional support(Lissak et al., 2024).
However, there is still a lack of relevant research in
the generation of educational resources, especially
in the field of mathematical problem generation.
The shortage of high-quality educational resources
is one of the main contradictions in the digitization of education. As shown in Figure 1, a highquality mathematical problem needs to be carefully
designed by domain experts and meet multiple requirements. First of all, completeness. During the
teaching process, the mathematical problem is for
teachers, students, and parents concurrently. Therefore it should contain four parts: mind of design,
The automatic generation of high-quality mathematical problems is practically valuable in
many educational scenarios. Large multimodal
model provides a novel technical approach for
the mathematical problem generation because
of its wide success in cross-modal data scenarios. However, the traditional method of separating problem solving from problem generation
and the mainstream fine-tuning framework of
monotonous data structure with homogeneous
training objectives limit the application of large
multimodal model in mathematical problem
generation. Addressing these challenges, this
paper proposes COMET, a “Cone of Experience” enhanced large multimodal model for
mathematical problem generation. Firstly, from
the perspective of mutual ability promotion
and application logic, we unify stem generation and problem solving into mathematical
problem generation. Secondly, a three-stage
fine-turning framework guided by the “Cone
of Experience” is proposed. The framework
divides the fine-tuning data into symbolic experience, iconic experience, and direct experience to draw parallels with experiences in the
career growth of teachers. Several fine-grained
data construction and injection methods are designed in this framework. Finally, we construct
a Chinese multimodal mathematical problem
dataset to fill the vacancy of Chinese multimodal data in this field. Combined with objective and subjective indicators, experiments on
multiple datasets fully verify the effectiveness
of the proposed framework and model.
**keywords: Mathematical Problem Generation,**
Cone Methodology, Large Multimodal Model, Educational Large Model, Smart Education
*Qian Wan, Xiaoxuan Shen, and Jianwen Sun are the
corresponding authors.
-----
stem, mind of solution, and answer, all with fluent
language and correct logic. Secondly, precision.
The mathematical problem should accurately reflect the objectives of the curriculum, be highly
related to given knowledge points, and provide the
function of exercises and tests. Lastly, differentiation. For certain key knowledge points under
investigation, the mathematical problem should differentiate in theme, problem type, difficulty level,
etc., to better serve complex and diverse learning
needs.
In summary, constructing high-quality mathematical problems requires the ability to generate
both stems and solutions to form a complete closed
loop. Traditionally, the studies of mathematical
problem generation are divided into two independent subfields, namely stem generation(Polozov
et al., 2015) (some works simply record as problem
generation) and problem solving(Kushman et al.,
2014). These studies mostly design rules or deep
neural networks to achieve reasoning, but are ineffective due to limitations in feature engineering
and model capabilities, and the research paradigm
that separates stem generation and problem solving does not meet the application requirement in
educational scenarios. LLM, which provides a
novel approach for mathematical problem generation, can not only generate coherent and logical
replies against cross-modality data, but also respond to diverse demands because of its ability
to in-context learning and instruction following.
However, there are still challenges when directly
applying the existing LMM to mathematical problem generation. Firstly, current work mostly focuses only on enhancing one aspect of the abilities
of LLM in stem generation or problem solving,
with little research proposing methods to simultaneously enhance both aspects of the model on the
scale of multimodality. Secondly, general LMMs
have learned abundant general concepts from a massive amount of pre-training data, but lack specialized knowledge needed for mathematical problem
generation. Thirdly, implementing domain finetuning based on general LMM is currently the basic
paradigm to execute domain task transfer. Previous data structure, as well as construction methods,
are simple, and the training objectives are not diverse enough, thus it’s hard to fully adapt to the
application requirements of the target domain.
To address the above issues, this paper proposes a “Cone of Experience” enhanced large multi
modal model for mathematical problem generation
(COMET). Firstly, stem generation and problem
solving are unified into mathematical problem generation tasks. Intuitively, the professional knowledge and practical experience required for stem
generation and problem solving share commonalities. Integrating the two abilities into a single
model can benefit the promotion of each other, and
is more practically logical in educational scenarios. Secondly, inspired by the “Cone of Experience”
theory proposed by American educator Edgar Dale
(Dale, 1947), we propose a three-stage fine-turning
framework. The “Cone of Experience” divides
human learning experience into three layers: symbolic experience, iconic experience, and direct experience. The experiences of different layers are
interconnected and only by fully integrating all
three layers can high-quality learning be achieved.
From the perspective of Data-centric AI (DCAI)
(Zha et al., 2023), we believe that the depth and
breadth of transfer training are key to domain transfer. Accordingly, for the specific task of mathematical problem generation, we design multiple
fine-grained data production methods for the three
types of experiences, establish multi-level experience data injection methods, and form a complete
fine-turning framework. Finally, a Chinese multimodal mathematical problem dataset (CMM12K)
is formulated, filling the gap in Chinese multimodal
corpus in this field. The effectiveness of the framework and model is comprehensively validated with
both objective and subjective indicators on multiple
datasets.
The main contributions of this paper can be summarized as follows:
- From the perspective of DCAI, we propose
COMET, a “Cone of Experience” enhanced
large multimodal model for mathematical
problem generation. To the best of our knowledge, this is the first work to systematically
enhance mathematical problem generation on
a single LMM.
- The formal definition of a three-stage finetuning framework based on the “Cone of Experience” is provided, together with the data
flow production methods for symbolic experience, iconic experience, and direct experience.
The corresponding knowledge infusion methods are empirically demonstrated.
- A Chinese multimodal mathematical problem
-----
Figure 1: The diagram of mathematical problem generation and the “Cone of Experience” guided model fine-tuning.
dataset (CMM12K) is built, which includes 4
types of problems and 12, 000 samples. This
work fills the gap in the field of Chinese multimodal corpus and provides a high-quality
benchmark for subsequent research.
- A large number of experiments have been carried out on multiple datasets, and the advancement and effectiveness of the proposed framework and model have been verified through
qualitative and quantitative analysis.
**2** **Related Work**
As mentioned above, the complete closed-loop of
mathematical problem generation involves two dimensions of stem generation (previous works simply record as problem generation) and problem
solving. This section introduces related work from
the perspective of technological development.
Early studies focus on the design of generation
rules and reason templates through summarizing
the characteristics and patterns of mathematical
problems(Singh et al., 2012). These works accomplish stem generation or mathematical reasoning by
combining the concepts, formulas, and theorems
(Polozov et al., 2015). Nandhini et al. (Nandhini
and Balasundaram, 2011) proposed two stem generation methods based on templates and context-free
grammar, generating stem with more diversified
structures and semantics. Moura et al. (De Moura
et al., 2015) built a knowledge base with a large
number of mathematical theorems embedded, providing interactive theorem proving based on rule
reasoning. These methods have a certain degree of
controllability and a high accuracy rate, but lack
problem adaptability and creativity.
With the development of machine learning, methods such as decision trees and support vector machine have been widely used to address mathematical stem generation or reasoning, recognizing the
structure and patterns of problems by models which
are trained based on large amounts of labeled data.
Heilman et al. (Heilman, 2011) used a syntactic
parser to convert input text into tree representations
and designed templates to achieve automatic transformation of problem forms. Roy et al. (Roy et al.,
2015; Roy and Roth, 2016) mapped unstructured
text to a more easily reasoned representation space
to eliminate text ambiguity, which can reason multistep arithmetic problems. Deep learning further
provides more powerful models, including seq2seq
model, attention mechanism, graph network, etc.,
providing new approaches for the generation and
reasoning of a wide range of mathematical problems such as arithmetic, algebra, and geometry (Wu
et al., 2022; Liu et al., 2019; Cao et al., 2021; Wang
et al., 2018; Chen et al., 2021). Zhou et al. (Zhou
and Huang, 2019) first proposed a seq2seq model
based on the attention mechanism, which generated the stem of applied problems given equations
and mathematical topics, significantly improving
the quality and diversity of generation. Wang et al.
(Wang et al., 2017) used recurrent neural network
to transform math word problems into equations
and based on similarity retrieval to improve reasoning performance. Group-ATT (Li et al., 2019)
applied multi-head attention to extract the global
feature, numerical feature, and quantity pair feature
of math word problems, achieving significant performance improvement on the Math23K dataset.
The mathematical problem generation has entered a new stage because of the sharply developed
and applied LLM (Christ et al., 2024; Drori et al.,
2022; Yue et al., 2023; Zhou et al., 2023), since
by training on large amounts of corpus LLMs can
understand complex language structures including
mathematical problems. In terms of stem generation, Droria et al. (Drori et al., 2022) use OpenAI
Codex (LLM of code data fine-tuning) to approach
human-level in generating college-level mathemat
-----
ical stems. Zong et al. (Zong and Krishnamachari,
2023) based on the few-shot learning prompt GPT3 to achieve the generation of related topics. In
terms of problem solving, WizardMath (Luo et al.,
2023) proposed a reinforcement learning from the
evol-instruct feedback method to construct more
complex instruction datasets, enhancing the mathematical reasoning ability of LLaMA-2. MathGLM
(Yang et al., 2023) demonstrates proficient multidigit arithmetic ability with only 2B parameters.
MathPrompter (Imani et al., 2023) increases the
credibility of the output result by generating multiple algebraic expressions or Python functions to
solve the same problem based on zero-shot chain of
thought. ToRA (Gou et al., 2023) integrates computational libraries and symbolic solvers to solve
complex mathematical problems.
**3** **Methodology**
Figure 2 is a schematic diagram of the three-stage
fine-tuning framework. The entire fine-tuning process is guided by the “Cone of Experience”, injecting symbolic experience, iconic experience, and direct experience. This section first defines the global
fine-tuning goals and notations, decomposing the
application requirements of the target domain into
three subtasks for reinforcement. Then, the threestage fine-tuning framework is expanded according
to the type of injected experience, elaborating on
the definitions, construction methods, and training
methods.
**3.1** **Problem Formulation**
To effectively apply LMM in teaching scenarios,
this work mainly enhances three capabilities of
LMM during the domain fine-tuning process: controllable generation (CG), analogy generation (AG),
and fine-grained solving (FS) for mathematical
problems. Both CG and AG reflect the ability of
LMM to generate problems, the difference being
that the former generates the mind of design and
original problem according to given requirements
(such as problem type, knowledge point, difficulty
level, etc.), while the latter understands and transforms the seed problems (such as changing topic
and type, expanding knowledge point or adjusting difficulty level). The FS reflects the problemsolving capacity of LMM, emphasizing the importance of producing detailed solution steps similar
to textbook references.
For LMM, the instructions for the above three
tasks can be formally defined as follows:
1. Given the problem type t, knowledge point c,
difficulty-level d and grade level g, the CG
prompt is constructed as qc = Fc(t, c, d, g).
2. Given the seed problem s ∈ _S, the AG prompt_
is constructed as qa = Fa(s).
3. Suppose a math problem is p, the prompt identifier of FS is qs = Fs(p).
The Fc, Fa, Fs can be flexibly designed according to the scene, and the settings in this work can
be seen in Section 3.4. Please note that in this paper, x represents a vector or a string, x represents a
scalar or a single character, X represents a set, and
**_X represents a function._**
The task requirements are defined as qin
_∈_
**_qc, qa, qs_** . This work can be defined as perform_{_ _}_
ing three-stage fine-tuning based on the general
LMM Flmm[(0)] [, combined with the “Cone of Experi-]
ence” theory, to obtain an LMM Flmm[(3)] [that meets]
the application requirements of mathematical problem generation in the teaching scene, so as to maximize the following conditional probability:
**_Plmm(m_** **_qin; θ[(3)]) =_**
_|_
_Nm_
_k=1_ **_Plmm(wk[m][|][q][in][ ⊕]_** **_[w]<k[m]_** [;][ θ][(3)][)][.] (1)
Y
**_Plmm(a_** **_qin, m; θ[(3)]) =_**
_|_
_Na_
**_Plmm(wk[a][|][q][in][ ⊕]_** **_[m][ ⊕]_** **_[w]<k[a]_** [;][ θ][(3)][)][.] (2)
_k=1_
Y
where θ[(3)] is the parameters of LMM Flmm[(3)] [,]
_⊕_ represents the string concatenation operation.
**_m = {w1[m][, w]2[m][, . . ., w]N[m]m[}][ represents the mind]_**
of design or problem-solving steps generated by
LMM, and a = {w1[a][, w]2[a][, . . ., w]N[a] _a[}][ represents the]_
original problem or final answer generated by the
LMM.
**3.2** **Symbolic Experience: Learning through**
**Abstractions**
This paper defines symbolic experience as the background knowledge related to the target domain, or
the prerequisite knowledge required to carry out
the target task. Symbolic experience does not directly help the model solve specific tasks, but it
-----
Figure 2: The diagram of the three-stage fine-tuning framework.
provides strong support. For mathematical problem generation, we summarize symbolic experience into four types for production: book knowledge, graph knowledge, arithmetic knowledge, and
general knowledge.
The data sources of book knowledge include
textbooks, lecture notes, teachers’ books, pedagogy, and psychology books, aiming to build teaching concepts and supplement subject knowledge.
Through methods such as web crawling, OCR, and
manual annotation, we complete data collection
and pre-processing (de-duplication, noise reduction, etc.) via both online and offline channels. The
number of book knowledge tokens sorted out in
this work is approximately 140M.
We construct a large heterogeneous subject
knowledge graph, where the node types include
grade, knowledge points, concept descriptions, and
example problems. This graph encompasses 1, 225
knowledge points and related concepts from elementary to junior high school, providing approximately 18, 000 example problems. To train LMM
using structured data, we design a graph sampling
method based on random walk to extract diversified
and differentiated disciplinary information. Then
GPT4(V) is used to transform the edge information
into a concatenated text, thereby generating graph
**knowledge for symbolic experiences. Specifically,**
the heterogeneous subject knowledge graph is represented as G =< _Nc, Ng, Nd, Np_ _, E >, where_
_{_ _}_
_Nc, Ng, Nd, Np represent the node sets of knowl-_
edge points, grade, concept descriptions, and related example problems. E is the set of edges between all nodes. The generation process of graph
knowledge can be seen in Algorithm 1, which generates two types of training samples: a whole link
learning sample (Sample_1) is formed as a fourtuple {grade, knowledge point, concept description,
_example problem}, and a relationship learning sam-_
ple (Sample_2) formed by the concatenation of
multiple adjacent knowledge points, totaling 220M
tokens.
The function of arithmetic knowledge is to
compensate for the shortcomings of LMM in arithmetic, to reduce the probability of numerical errors
occurring in the mathematical reasoning process.
It is an equation consisting of pure numbers and
mathematical operators. We directly use the arithmetic dataset proposed by Yang et al. (Yang et al.,
2023). This dataset is carefully designed, containing not only operations such as addition, subtraction, multiplication, division, and exponentiation,
but also various numerical formats such as integers, decimals, percentages, fractions, and negative
numbers. In this work, approximately 200M tokens are extracted as fine-tuning data for arithmetic
knowledge.
We extracted approximately 220M tokens of
generic data (including plain text, single-turn, and
multi-turn Q&A) from open-source corpora, such
as Wikipedia, SkyPile-150B(Wei et al., 2023),
MOSS(Sun et al., 2023) and BELLE(Ji et al., 2023),
-----
**Algorithm 1 Graph Knowledge Generation**
**Input: G =< {Nc, Ng, Nd, Np}, E >,**
_D1 = ∅, D2 = ∅_
**Output: Sample_1, Sample_2**
1: ni = random(Nc)
2: D1 = D2 = {nc}
3: nj = random(Ng)
4: if eij is not None then
5:6: end ifD1 = D1 ∪{nj}
7: nj = random(Nd)
8: if eij is not None then
10:9: end ifD1 = D1 ∪{nj}
11: for k = 1; k < 3; k + + do
12: _nj = random(Nd)_
13: **if eij is not None then**
15:14: **end ifD2 = D2 ∪{nj}**
16: end for
17: for k = 1; k < 5; k + + do
19:18: **ifn ej =ij is not None random( thenNc −{ni})**
21:20: **end ifD2 = D2 ∪{nj}**
22: end for
23: Sample_1 = GP T 4V (D1)
24: Sample_2 = GP T 4V (D2)
To construct stem generation experience, we
first collect exercises and test items covering all
grades from elementary to junior high school. Next,
based on manual annotation methods, we extracted
key information from math problems in several
dimensions including educational grade, problem
type, knowledge points, and difficulty, and deduced
problem requirements in reverse. Finally, we constructed a query-problem pair, with manual writing
examples of mind of design, and used GPT4(V) for
bulk supplementation of question making ideas in
a few-shot manner. The final data form is {problem
_requirement, mind of design, original problem}._
To construct problem solving experience, we
hire normal school students to write analyses and
answers for the collected mathematical problems.
However, due to differences in cognitive levels and
writing styles between individuals, it is difficult to
align the granularity of the analyses. To generate
fine-grained analyses, we use GPT4(V) to generate
high-quality analyses with consistent writing styles
based on manually parsed data. Three generation
methods are proposed:
1. The task requires GPT4(V) to directly solve
the problem: {q} ↣ _{s}._
2. The task requires GPT4(V) to fill in the middle
process when both the problem and answer
are given: {q, a} ↣ _{s}._
3. When the complete problem, analyses, and answer are given, GPT4(V) is required to rewrite
the analyses: {q, s, a} ↣ _{s[′]}._
We chose the second method as the data production method for this stage due to its stability. The
final data form is {mathematical problem, mind of
_solution, final answer}._
**Failure experience is mainly generated by**
LMMs that have not been domain-adapted. First,
a collaborative environment consisting only of
LMMs is built, among which GPT4(V) plays the
role of the discriminator, and multiple LMMs (such
as Qwen-VL-Chat, Yi-VL-6/34B, etc.) play the
role of generators. Secondly, two generators are
randomly assigned to complete the task of mathematical problem generation, and then the discriminator guides and evaluates the degree of completion.
Finally, the summarized procedural data forms a
sample in the format {task instruction, collabora_tion information, guidance feedback}._
In this stage, the data pertaining to the iconic
experience is learned by the LMM in the form of
instruction Tuning. All data is arranged in a query
as the general knowledge in symbolic experience.
The main role of this knowledge is to slow down
the forgetting phenomenon caused by continued
pre-training.
This stage processes all the data associated with
symbolic experience as pre-training form and infuses it into the LMM for learning, i.e., no masking
of data content is undertaken. The backpropagation of model training computes loss from the first
token of the input. Assuming the input sample is
**_x, the loss function at this stage is as follows:_**
**_Loss1st(θ[(0)]) =_**
_−_
log P (xi **_x<i; θ[(0)])._**
_|_
_xXi∈x_
(3)
**3.3** **Iconic Experience: Learning through**
**Observation**
The iconic experience is defined as the data generated by the subject in the process of performing the
target task, which includes not only human experts
proficient in the target task but also the large model.
Injecting the iconic experience aims to allow LMM
to learn mathematical problem generation from humans and improve upon the failed reasoning data
produced by other LMMs. This paper summarizes
the iconic experience into three types of production: the experience of stem generation, problem
solving, and failure.
-----
response pair, and a masking process is applied to
the query part. The backward propagation of model
training only starts calculating loss from the first
token of the response. Suppose the query-response
pairs are defined as q : a, the model input sequence
is x = {q ⊕ **_a}. The loss function at this stage is_**
as follows:
**_Loss3st(θ[(2)], θ[(3)]) =_** Ex,yw,yl _D[log σ(_
_−_ _∼_
_β log_ **_[P][ (][y][w][|][x][;][ θ][(3)][)]_**
**_P (yw_** **_x; θ[(2)])_** **_P (yl_** **_x; θ[(2)])_** [)]][.]
_|_ _[−]_ _[β][ log][ P][ (][y][l]|[|][x][;][ θ][(3)][)]_
(5)
the θ[(3)] uses θ[(2)] as the initial solution, for the
same input x, yw and yl represent the preferred
solution and the non-preferred solution.
**4** **Experiments**
**4.1** **Implementation**
We conduct the “Cone of Experience” enhanced
three-stage fine-tuning based on the well-trained
Qwen-VL-Chat(Bai et al., 2023) provided by Alibaba Cloud. For all fine-tuning stages, Adam with
different learning rates is used as the optimizer, and
the gradient truncation threshold is set to 0.5. We
incorporate a warmup ratio of 0.05 and employ the
batch size of 64. To control overfitting, we apply a
weight decay of 0.1. The Max token is uniformly
set to 2, 048, and the data is spliced or truncated in
the pre-processing stage to improve training efficiency or reduce information loss. In addition, we
employ deepspeed with ZeRO-2 stage(Rajbhandari
et al., 2020) to improve parallel efficiency for speed
up training.
In the first and second stages, we use LoRA
(Hu et al., 2021) to perform parameter-efficient
fine-tuning, then set rank, alpha and dropout to 16,
32 and 0.05. All linear layers (including the image encoder) of LMM except the head layer are
designated to apply the LoRA adapter. Among
them, the learning rate of the first stage is set to
2 × 10[−][5], and halved in the second stage. We
use the DPO(Rafailov et al., 2024) algorithm to
inject direct experience for learning reasoning preferences in the third stage. The learning rate is
5 × 10[−][5], the DPO smoothing value is 0.1. To
ensure reproducibility, the random seed is set to
42 during the whole experiment. The three-stage
fine-tuning is performed on 8 NVIDIA A800-80G.
For one epoch, the three-stage fine-tuning takes
about 200, 50, and 20 GPU hours. In the test stage,
the inference parameters of LMM are uniformly
set top_k to 20, top_p to 0.7, repetition_penalty to
1, and temperature to 0.3.
**4.2** **Dataset and Baseline**
The datasets used are shown in Table 1.
GSM8K(Cobbe et al., 2021) is an English single
**_Loss2st(θ[(1)]) =_**
_−_
log P (xi **_x<i; θ[(1)])._**
_|_
_xXi∈a_
(4)
**3.4** **Direct Experience: Learning by Doing**
The direct experience is defined as the procedural
data generated when the fine-tuned object carries
out the target task with results feedback. Such experience aims to correct the inference preference
of the LMM with higher-order domain values, allowing it to embodied evolve during the practice.
Firstly, we design a set of task instructions for
three subtasks (CG, AG, and FS). For CG, the focus of the prompt design is to highlight the controllable elements in the generation process. This
paper mainly considers four controllable factors in
the problem generation process: grade, problem
type, knowledge point, and difficulty level, and
requires giving out the mind of design. For AG,
the prompt design focuses on asking the model to
first understand the seed problem to initially judge
the important elements such as problem type and
knowledge points, and then give the mind of design and rewrite the problem in the form of chain
of thought. For FS, the core concept of the prompt
design is to clearly require the model to generate a
detailed analysis process rather than just outputting
an answer. All prompts designed in this work for
the three tasks are shown in Figure 3.
Secondly, the LMM can produce multiple different responses to the same query due to the randomness of its reasoning. We order the preferences of multiple responses corresponding to the
same instruction. This paper utilizes human preferences (manual annotation) and model preferences
(GPT4(V) generation) during the preference ranking process. The final data form is {task instruc_tion, high preference response, low preference re-_
_sponse}._
The fine-tuning stage uses the direct preference
optimization (Rafailov et al., 2024) (DPO) algorithm to infuse direct experience into LMM, the
loss function is as follows:
-----
**Analogy Generation**
模式:举一反三;
输入:种子试题-“种子试题(图-文)” ;
任务:请你模拟输入中给定的种子试题的出题思
路,构造一道类似的数学试题。要求尽可能采用
和例题不一样的问题场景,允许扩大或减小题目
难度、允许增设知识点、允许改变题目类型。
请尽力理解种子试题的出题思路,并一步一步地
详细记录你的出题思路,其中出题思路必须包括
知识点、题型、难度、题干、数据等方面的设计
思路,最后给出你的原创试题 。
**Fine-grained Solving**
模式:试题解答;
输入:“题目(图-文)”;
任务:请对输入中给定的数学
题进行严谨具体的解答,要求
一步一步地详细记录解题思路
和过程。
**Controllable Generation**
模式:可控生成;
输入:参考图-“图片”、学段范围-“学段”、
知识点-“知识点”、题型-“题型”、难易
程度-“难易程度”;
任务:请你根据输入中给定的学段范围、知
识点、题型和难易程度,构造一道符合要求
的原创数学题。要求一步一步严谨具体的记
录出题思路和原创试题,其中出题思路必须
包括知识点、题型、难度、题干、数据等方
面的设计思路。
Figure 3: Prompt of the three tasks.
modal math word problem (MWP) dataset, containing 7, 473 training samples and 1, 319 test samples.
It mainly tests the reasoning and arithmetic abilities of primary school math. TAL-SCQ5K-CN[1]
is a Chinese single-modal multiple-choice problem (MCP) dataset for K12 math, including 3, 000
training samples and 2, 000 test samples.
This work builds a Chinese multi-modal math
problem dataset CMM12K, which includes 6, 000
single-modal math problems and 6, 000 multimodal math problems, covering most of the knowledge points in K12 math from primary school to
junior high school. This dataset contains four types
of problems: MCP, math fill-in-the-blank problem
(MFP), MWP, and math proof problem (MPP). The
training set of CCM12K is divided into 10, 000
samples, and the development set and test set are
1, 000 samples each.
Five open source LMMs are specified as baselines, covering different parameter levels, including Qwen-VL-Chat(7B)(Bai et al., 2023), Yi-VL6B/34B(Young et al., 2024), LLaVA-1.6(7B)(Liu
et al., 2024a), CogVLM(17B)(Wang et al., 2023b).
It should be noted that in the three-stage fine-tuning,
all datasets are aligned with Chinese. The training sets of each dataset participate in the construction of iconic experience and the direct experience mainly depends on the development set. The
test set is completely isolated from all fine-tuning
stages.
**4.3** **Metrics**
This study designs three types of evaluation criteria
for different capabilities of LMM.
**Scoring mode based on GPT4(V). Multiple**
scoring dimensions are designed for controllable
**generation (CG), analogy generation (AG), and**
**fine-grained solving (FS). The scoring dimensions**
of CG include language fluency (LF) (both math
1https://github.com/math-eval/TAL-SCQ5K
ematical terms and formulas), logical correctness
(LC), content completeness (CC) (both ideas and
stems), knowledge point relevance (KR), difficulty
appropriateness (DA) and type adaptability (TA).
The scoring dimensions of AG include language fluency (LF), logical correctness (LC), content completeness (CC), reasoning rationality(RR), and seed
relevance (SR). The scoring dimensions of FS include language fluency (LF), logical correctness
(LC), analytical completeness (AC), and answer
accuracy (AA). The GPT4(V) is required to give a
score and reason in the range of 1 to 10 according
to the dimension.
**Arena mode based on GPT4(V). Considering**
the subjectivity of mathematical problem generation, GPT4(V) is introduced as a referee to comprehensively rule on different responses of the same
query from aspects such as accuracy, fluency, and
values. Specifically, we calculate a rating value
for each LMM to represent the ability on a certain task. During the judging process by GPT4(V),
the ELO rating algorithm(Elo, 1967; Zheng et al.,
2024) is used to update the rating value of the participating LMM. Assume the initial ELO rating
value is 1, 000. In M rounds of competition, two
LMMs (called LMM-x and LMM-y) are randomly
selected to reply to the same query each time. According to the ruling results of GPT4(V), the rating
value is calculated as follows:
1
(6)
1 + 10[(][R][x][−][R][y][)][/][400][,]
_Ex =_
_′_
_Ri_ [=][ R][i] [+][ K][ ·][ (][P K][(][i][)][ −] _[E][i][)][.]_ (7)
where Rx and Ry respectively represent the rating values of LMM-x and LMM-y in the previous round, Ex and Ey = 1 − _E′_ _x represent_
the current expected rating value. Ri [(][i][ ∈{][x, y][}][)]
represents the updated rating value of LMM-i,
**_P K(i) ∈{0, 1} is a boolean function that identi-_**
-----
Table 1: Statistics of datasets.
**Dataset** **Language** **Modal** **Type** **#Train.** **#Dev.** **#Test.**
GSM8K En → Zh Single MWP 7, 473 - 1, 319
TAL-SCQ5K-CN Zh Single MCP 3, 000 - 2, 000
CMM12K Zh Multi MCP, MFP, MWP, MPP 10, 000 1, 000 1, 000
fies whether LMM-i win in this round. K represents the K-factor, which defaults to 4 and controls
the change rate of rating.
**Objective evaluation indicators. For problems**
with clear answers (this work refers to MCP, MFP,
and MWP), the accuracy (ACC) is returned through
matching to report the solving performance of
LMM. For the MPP, BLEU-1/2/L(Papineni et al.,
2002) and ROUGE-1/2/3/4(Lin, 2004) are used to
approximate testing.
**5** **Result and Discussion**
**5.1** **Performance of Controllable Generation**
**Scoring mode. We employed GPT4(V) to evaluate**
the responses of LMMs to CG tasks on the test
set and subsequently reported the average scores
across six dimensions. As illustrated in Figure 4(a),
our model outperforms all other baselines in 4 of 6
dimensions(LF, LC, KR, and DA).
Figure 4(b) shows that our model performs comparably to Yi-VL-34B across the six dimensions of
CG tasks. Models with comparable parameter levels, such as Qwen-VL-Chat (7B), LLaVA-1.6 (7B),
and Yi-VL-6B, demonstrate slightly inferior performance, while the larger parameter-scale model,
CogVLM with 17B parameters, exhibits relatively
lower performance.
Although our model slightly lags behind Yi-VL34B in terms of content completeness and problem type adaptability, it’s important to note that
our parameter count is approximately five times
smaller than that of Yi-VL-34B. Therefore, this
discrepancy may arise from limitations imposed
by the scale of parameters, which could hinder the
comprehension and processing of contextual information.
Through a three-stage fine-tuning process guided
by the “Cone of Experience”, our model can effectively focus on generating responses that prioritize
controllable factors such as language fluency, logical correctness, knowledge point relevance, and
difficulty appropriateness.
**Arena mode.** We conducted approximately
3, 600 competitions, where two LMMs were ran
domly selected for an anonymous battle, and the
winner was determined by GPT4(V). As shown
in Figure 5(a), upon completing all competitions,
our model’s ELO rating significantly surpassed the
baselines with a median value of 1, 185.
The heatmap illustrates the win rates of battles
between various LMMs, revealing that our model
exhibits absolute superiority when battling against
Qwen-VL-Chat, Yi-VL-6B, and LLaVA-1.6. Moreover, even when pitted against Yi-VL-34B, our
model maintains a slight edge with a 2% advantage.
This outcome suggests that while larger parameter sizes may enhance text comprehension and
generation capabilities, our model, after undergoing multi-modal fine-tuning guided by the “Cone
of Experience”, exceeds the performance of a 34Bparameter LMM in CG task without altering its
parameter scale.
**5.2** **Performance of Analogy Generation**
**Scoring mode. In the AG task, our model demon-**
strates robust capabilities and absolute superiority. As depicted in Figure 4(a) and (b), firstly,
our model comprehensively outperforms all baselines in various evaluation dimensions of AG, particularly surpassing the strong baseline Yi-VL34B with a parameter size five times larger than
ours. Secondly, our backbone model Qwen-VLChat ranks relatively lower among the six models, around the 4th to 5th position, trailing behind
models with similar parameter scales such as YiVL-6B and LLaVA-1.6. However, following the
three-stage fine-tuning guided by the “Cone of Experience”, its AG capabilities have significantly
improved, with an average score increase of approximately 60% across all dimensions.
**Arena mode. As illustrated in Figure 5(b), after**
undergoing 3, 600 random competition rounds, our
model’s ELO Rating in AG tasks significantly surpasses all baselines. From the win rate heatmap, it’s
evident that our model exhibits absolute dominance
when facing models with equivalent or double the
parameter scale, defeating Qwen-VL-Chat, Yi-VL6B, CogVLM, and LLaVA-1.6 with win rates of
-----
FS-LF FS-LC FS-AC FS-AA
CG-LF CG-LC CG-CC CG-KR CG-DA CG-TA
AG-LF AG-LC AG-CC AG-RR AG-SR
(a) (b)
Rank
Figure 4: Performance of our model and baselines on a broad range of problem-solving and generation fine-grained
indicators. (a) The average score on 15 indicators in three tasks(CG, AG, and FS); (b) The rank of models’ scores
in each evaluation indicator. Here, each evaluation indicator is expressed as ‘task-dimension’, and the number in
the radar chart refers to the average score of models, we only present the scores of our model and Yi-VL-34B
for visualization purposes. For instance, the label ‘CG-DA’ depicted at the bottom of the radar chart denotes the
difficulty appropriateness (DA) measure for the controllable generation (CG) task, where our model attains scores
of 8.57 (rank 1), while Yi-VL-34B obtains 7.89 (rank 2).
97%, 96%, 100%, 93% respectively. Even when
confronted with baselines five times larger in parameter size, our model maintains a win rate of
73%. AG tasks hold educational value in demand
for problem generation. Although our backbone
model may lack training in this specific skill, our
model successfully acquires it through the threestage fine-tuning process. This outcome further
validates the superiority of the proposed framework
and model in this study.
**5.3** **Performance of Fine-grained Sovling**
**Scoring mode. We report the scores for 4 dimen-**
sions in the FS task of all LMMs on the test set. The
average scores across various evaluation dimensions are presented in Figure 4(a), where our model
achieved the state-of-the-art (SOTA) in 3 of 4 evaluation dimensions(LC, AC, and AA), falling slightly
short only in language fluency (LF) compared to
Yi-VL-34B. Figure 4(b) depicts the ranking across
all evaluation dimensions, further validating the superiority of our model, which maintains an absolute
lead in most dimensions with a relatively smaller
parameter size (7B). Moreover, by comparing with
the backbone model Qwen-VL-Chat, we can con
clude that the three-stage fine-tuning guided by the
“Cone of Experience” significantly enhances its FS
capabilities.
**Arena mode. As illustrated in Figure 5(c), af-**
ter 3, 600 competition rounds, our model triumphs
over all baselines with the highest ELO Rating.
The average ELO rating leads the backbone model
Qwen-VL-Chat by approximately 20%, and also
maintains an advantage against Yi-VL-34B. However, analyzing from the perspective of win rates
reflected in the heatmap, our model lags behind
Yi-VL-34B with a slight disadvantage, as it has a
win rate of only 49%.
Based on the scoring results of GPT4(V) and the
arena outcomes, it can be concluded that, in the
FS task, our model demonstrates significant performance improvement compared to the backbone
model Qwen-VL-Chat, surpassing baselines of the
same parameter scale comprehensively, and even
shows certain advantages compared to baselines
approximately five times its size.
**5.4** **Result of Objective Evaluation**
**Performance of Objective Problems. As shown**
in Table 2, our model dramatically outperforms
-----
(a) (b) (c)
Figure 5: The statistics of ELO rating over 3, 600 rounds and the win rate between models. The three subfigures,
(a), (b), and (c), respectively represent tasks FS, CG, and AG. For each subfigure, the top section represents the
ELO rating, here we sorted the models based on their ELO rating medians, while the bottom section represents the
win rate. Abbreviations for Yi-VL-6B/34B, LLaVA1.6, and Qwen-VL-Chat, denoted as Yi-6/34, LLaVA, and Qwen
respectively.
Table 2: Performances of each LMM on GSM8K, TAL-SCQ5K-CN, and CMM12K. Here Single/Multi indicates
single modal/multi modal. All results are reported in terms of Acc (%), bold indicates optimal performance while
underline indicates suboptimal performance.
**CMM12K**
**Model** **GSM8K** **TAL-SCQ5K-CN** MCP MFP MWP
Total
Single Multi Single Multi Single Multi
Qwen-VL-Chat 18.04 16.20 24.12 27.33 22.00 29.61 17.33 29.13 19.33
Yi-VL-6B 22.52 6.55 21.05 14.00 18.67 32.19 18.67 28.16 14.67
LLaVA-1.6 18.65 10.05 19.42 17.33 24.67 27.04 11.33 22.82 13.33
CogVLM 8.87 5.35 14.11 4.67 22.67 21.89 7.33 17.48 10.67
Yi-VL-34B **38.66** 7.60 24.54 16.67 16.67 **38.62** 19.33 33.98 22.00
Ours 28.89 **22.05** **33.84** **35.33** **34.00** 35.62 **29.33** **35.44** **33.33**
the backbone model Qwen-VL-Chat in terms
of accuracy on GSM8K and TAL-SCQ5K-CN
(+10.62%, +5.85%), achieving state-of-the-art
(SOTA) on TAL-SCQ5K-CN. Although Yi-VL34B leads on GSM8K, its parameter size, which
is 5 times larger than ours, implies greater training
cost and time.
On CMM12K, our model’s overall score of
33.84% remarkably exceeds all baselines, with approximately an 8% performance advantage over
the second-place Yi-VL-34B. Specifically, we conducted statistics on two modalities and three problem types, totaling 2×3 = 6 categories. The results
show that our model achieved SOTA in 5 of 6 categories, only slightly lagging behind Yi-VL-34B
in single modal MFP by a small margin (−3%).
Compared with the baseline of the same parameter size, our model leads in all types of problems.
For CogVLM, which has twice the parameter size
of ours, our model maintains a lead of more than
15% in all tasks. In summary, our model achieves
relatively excellent problem-solving performance
in all types of problems with a smaller parameter
size (approximately 20% of Yi-VL-34B).
**Performance of MPP. In regards to the MPP**
in the CMM12K dataset, we draw from the evaluation logic of machine translation, comparing each
LMM’s response with the standard answer and calculating BLEU and ROUGE scores. The standard
answer here is derived from manual annotation, en
-----
Table 3: Performances of each LMM on CMM12K with MPP, bold indicates optimal performance (higher is better
for all metrics).
**Category** **Metric** **Qwen-VL-Chat** **Yi-VL-6B** **LLaVA** **CogVLM** **Yi-VL-34B** **Ours**
ROUGE@1 0.38 0.32 0.35 0.20 0.38 **0.80**
ROUGE@2 0.25 0.20 0.21 0.05 0.20 **0.67**
ROUGE@L 0.32 0.27 0.23 0.11 0.27 **0.69**
BLEU@1 0.28 0.32 0.27 0.18 0.34 **0.71**
BLEU@2 0.17 0.15 0.11 0.05 0.16 **0.62**
BLEU@3 0.12 0.10 0.07 0.02 0.11 **0.54**
BLEU@4 0.08 0.07 0.05 0.01 0.07 **0.47**
ROUGE@1 0.37 0.32 0.37 0.21 0.37 **0.81**
ROUGE@2 0.24 0.18 0.21 0.06 0.20 **0.68**
ROUGE@L 0.30 0.26 0.22 0.12 0.25 **0.67**
BLEU@1 0.24 0.29 0.25 0.18 0.30 **0.69**
BLEU@2 0.14 0.14 0.10 0.05 0.14 **0.60**
BLEU@3 0.09 0.09 0.06 0.02 0.09 **0.52**
BLEU@4 0.07 0.06 0.04 0.01 0.06 **0.46**
ROUGE@1 0.39 0.33 0.33 0.18 0.40 **0.80**
ROUGE@2 0.26 0.20 0.20 0.04 0.23 **0.66**
ROUGE@L 0.34 0.28 0.25 0.10 0.31 **0.67**
BLEU@1 0.31 0.33 0.29 0.18 0.38 **0.68**
BLEU@2 0.18 0.15 0.13 0.05 0.19 **0.59**
BLEU@3 0.13 0.10 0.08 0.02 0.13 **0.51**
BLEU@4 0.09 0.07 0.05 0.01 0.09 **0.43**
Total
Single-Modal
Multi-Modal
compassing not only the geometric elements and
their relationships but also the complete proof process. By calculating BLEU and ROUGE, we can
approximately determine whether the output of the
LMM is in accordance with mathematical grammar and proof logic. Table 3 displays the response
quality of our model and the baseline model on
single-modal and multi-modal proof problems. The
results indicate that our model is superior to all
baselines as far as the response quality of the proof
problems is concerned.
**5.5** **Ablation Study**
To validate the effectiveness of the three-stage finetuning framework proposed in this paper, we designed the following ablation experiments. We
refer to the model after the i[th] stage of fine-tuning
as Si, where i ∈{1, 2, 3}. We preserved all checkpoints of the fine-tuning stages: S0 - the original
backbone model Qwen-VL-Chat without any finetuning. S1 - the model after the first stage of finetuning, injected with symbolic experience. S2 based on S1, the model after the second stage of
fine-tuning, infused with iconic experience. S3
- based on S2, the model after the third stage of
fine-tuning, incorporating direct experience.
We first obtained these four models’ responses
on the test set regarding the CG, AG, and FS tasks,
and scored them based on GPT4(V) in 15 finegrained dimensions. Figure 6 shows the score
changes on 15 dimensions for the three types of
tasks. The results indicate that with the deepening
of the three-stage fine-tuning, the model’s scores
in all dimensions show an increasing trend.
We calculated the absolute performance improvements at each fine-tuning stage and reported
them in Table 4. The results show that on most
capability dimensions, ∆2 = max{∆1, ∆2, ∆3},
meaning that the second stage contributed the most
to the performance improvement in the three-stage
fine-tuning framework based on the “cone of experience”. We also observe some exceptions, including
the four dimensions FS-EC, AG-CC, AG-RR, and
AG-SR, whose common feature is that the original backbone model performs poorly, and the first
stage of training plays a key role in improving the
performance on these dimensions. In all dimensions of each task, further improvements in model
performance can be achieved through continuous
injecting of direct experience in the third stage of
fine-tuning.
**5.6** **Case Study**
To demonstrate the strength of our model, we have
selected some examples of CG, AG, and FS tasks
respectively. Previous results, such as Figure 4,
have already proven that Yi-VL-34B - a model with
more than five times the number of parameters as
ours, is a comparable competitor. Thus, for each
task, we show the difference in response quality
-----
Avg. Score@AG Avg. Score@FS
LF
LC
AC
AA
4
3
|C|Col2|Col3|
|---|---|---|
|C R|||
|A A|||
||||
||||
|Col1|Col2|Col3|
|---|---|---|
||||
||||
||||
||||
|Avg. Score@CG 9 9 LF LF LC 8 LC 8 CC CC KR 7 RR Score 7 DA SR TA 6 6 5 5 S0 S1 S2 S3 S0 (a)|Col2|Avg. Score@AG|Col4|Col5|Col6|
|---|---|---|---|---|---|
||C R|||||
||R|||||
|||||||
|||||||
|||S1 S2 S3 (b)||||
Figure 6: The average scores across various dimensions for our model at each stage of training.
**CG** 输入:学段范围-8年级、知识点-“相似三角形的判定”、题型-证明题、 输入:一个缺角的三角形ABC残片如图,量得∠A=40°,
难易程度-简单; **AG** ∠B=65°,则这个三角形残缺前的∠C的度数为()
任务:请你根据输入中给定的图片、学段范围、知识点、题型和难易程度, A.55° B.65° C.75° D. 85°
构造一道符合要求的原创数学证明题。 任务:请你模拟输入中给定的种子试题的出题思路,构造
出题思路:本题考查相似三角形的判定定理, 熟练掌握相似三角形的判定定理是解题的关 一道类似的数学试题。
键,我准备首先说明图中角、点、边的数值关系和位置关系,然后在此基础上考察证明2 种子题的出题思路:种子题的主要知识点是三角形的内角和定理,以及三角形的
个指定的三角形相似。
## ⽮量图原创试题:∼" ABC. 如图,-!顺序已修改、简称已再次修改ABC中,D、E分别是AB、AC上的点,且∠AED=∠B,求证: " AED 性质。题目中给出了三角形ABC的两个内角A和B的度数,要求求解三角形残缺前的角C的度数。这是一个典型的三角形内角和的问题,通过已知的两个内角的度数,可以求解出第三个内角的度数。题目的难度适中,主要考察学生对三角形内角和
定理的理解和应用。
出题思路:根据学段范围、知识点、题型和难易程度,构造一道证明题。 原创题的出题思路:原创题的知识点仍然是三角形的内角和定理,以及三角形的
原创试题:证明:在图中,AD=EC,∠A=∠E,那么△ADC和△AEC是相似三角形。
性质。为了增加题目的难度,我打算先让学生通过已知的两个内角的度数,求解
出第三个内角的度数,然后在此基础上考察该三角形的形状。题目的场景我打算
首先,我们需要在图中表示出相似三角形。我们可以通过构造两个等边三角形来表示相似 设定为一个三角形的风筝,风筝的形状和三角形的性质相吻合,这样可以增加
三角形。
题目的趣味性。题目的数据我打算设置为∠A=40°,∠B=70°,这样可以保证题
原创试题:请证明图中的两个三角形是相似的。 目的解答过程是可行的。
**(a) Case of Controllable Generation** 原创试题:小明拾到了一个残缺的三角形的风筝,量得∠A=40°,∠B=70°,则
这个三角形风筝残缺前的形状为___三角形。
**FS** 输入:且AB=10cm,则DB=()如图所示,C、D是线段AB上两点,若AC=3cm,C为AD中点 种子试题的出题思路:这是一道关于三角形的题目,难度适中。题目中给出一个
A.4cm B.5cm C.6cm D.7cm 已知的三角形,要求计算出这个三角形的未知角的度数。这个题目可以考察学生
任务: 请对输入中给定的数学题进行严谨具体的解答,要求一步 对三角形内角和的掌握,以及根据已知条件计算未知角的能力。
一步地详细记录解题思路和过程。 原创试题的出题思路:这是一道关于相似三角形的题目,难度较大。题目中给出
详细解析步骤如下: 两个不同的三角形,要求判断这两个三角形是否相似,并给出判断的依据。这个
这是一道关于线段长度的问题。根据题意,我们知道AC=3cm,C为AD中点,所以 题目可以考察学生对相似三角形的判断能力,以及根据图形的性质进行推理的能
AD=2AC=6cm。 力。
又因为AB=10cm,所以BD=AB-AD=10cm-6cm=4cm。所以,答案是A,即DB=4cm。 原创试题:在直角三角形ABC中,∠A=40°,∠B=65°,求∠C的度数。
因此,答案是A。
种子试题的出题思路:该题主要考察的是三角形的内角和定理,即三角形三个内
详细解析步骤如下: 角的和为180°。通过给出一个残缺的三角形,要求计算残缺前的∠C的度数,从
根据题目,我们知道AC=3cm,C为AD中点,所以AD=AC=3cm。 而考察学生对三角形内角和的理解和应用。
1.因为AB=10cm,所以DB=AB-AD=10cm-3cm=7cm。 原创试题的出题思路:我决定构造一个类似的试题,但采用不同的场景。我选择
所以,DB=7cm。 了一个等腰三角形,并给出其顶角和底边长度,要求计算其底角的度数。这样,
我既保留了三角形内角和定理的考察,又改变了问题场景,提高了试题的新颖性。
根据题意,AC=3cm,C为AD中点,所以CD=2cm,又因为AB=10cm,所以DB=10cm-2cm=8cm。 原创试题:如图,等腰三角形ABC的顶角∠A为60°,底边BC的长度为4cm,求
因此答案是D 7cm。 ∠B的度数。
**(b) Case of Fine-grained Solving** **(c) Case of Analogy Generation**
**CG**
Figure 7: Case of the three tasks. The key correct parts of responses are highlighted in green and the incorrect ones
in red.
between our model, the backbone model QwenVL-Chat, and Yi-VL-34B when using the same
prompt, the case details refer to Figure 7.
**Case of CG** In the example shown in Figure 7(a),
LMM was asked to generate one problem based
on the given planar geometric picture. Our model
accurately captured the geometric elements in the
picture and expressed the test problem using the
correct mathematical language. In contrast, QwenVL-Chat failed to comprehend the content of the
given picture, erroneously providing the condition
(△ADC ∼△AEC, but both ADC and AEC are
not triangles). For Yi-VL-34B, the problem it constructed was not based on the given image, hence
not aligning with requirements.
**Case of FS** Given this problem, Qwen-VL-Chat
correctly understood the elements in the picture,
but its erroneous reasoning steps (AD=AC=3cm
is given, but actually AD=2AC=6cm) led to an
incorrect final result. Yi-VL-34B made similar
mistakes as Qwen-VL-Chat. However, our model
first parsed the problem requirements, and then extracted the geometric elements of the given picture,
and finally correctly reasoned step by step according to the problem to arrive at the correct answer
(DB=4cm).
**Case of AG** We require LMMs to simulate the
seed problem and construct a new problem. Each
LMM first analyzes the ideas for the construction
of the seed problem and then constructs a problem, and they are also asked to explain the thought
process behind the constructed problem.
Our model first understands the meaning of
the problem, parses the content of the knowledge
-----
Table 4: Performance improvements(∆) of each evaluation dimension at each stage. Here, ∆i represents the
improvement in average score relative to the (i − 1)[th]
stage after the i[th] stage training, the bold indicates
max ∆1, ∆2, ∆3 .
_{_ _}_
Task dimension ∆1 ∆2 ∆3
**LF** 0.385 **0.510** 0.195
**LC** 0.560 **0.935** 0.190
**FS**
**AC** **1.300** 0.560 0.075
**AA** 0.315 **0.765** 0.090
**LF** 0.365 **1.945** 0.380
**LC** 0.655 **2.140** 0.370
**CC** 0.290 **1.200** 0.595
**CG**
**KR** 1.170 **1.375** 0.485
**DA** 0.600 **1.745** 0.485
**TA** 0.125 **1.920** 0.435
**LF** 0.535 **2.640** 0.070
**LC** 1.120 **2.450** 0.105
**AG** **CC** **2.045** 1.855 0.020
**RR** **2.150** 1.575 0.125
**SR** **2.330** 0.810 0.070
bolic, iconic, and direct. Based on this, we design
a three-stage fine-tuning framework to enhance the
capabilities of problem generation and problem
solving within a single LMM to meet the requirements of educational applications. Moreover, a
Chinese multimodal mathematics problem dataset
(CMM12K) is built to alleviate the scarcity of Chinese multimodal corpora in this field. Extensive
experiments have demonstrated the advancement
and effectiveness of the proposed model. In the future, we will explore retrieval-enhanced generation
methods based on model recall behavior, since the
proposed direct experience can potentially serve as
LMM historical memory.
**Acknowledgments**
This work was financially supported by the National Science and Technology Major Project
(Grant Nos. 2022ZD0117105), National Natural Science Foundation of China (Grant Nos.
62293554 and 62307015), China Postdoctoral
Science Foundation (Grant Nos. 2023M741304
and 2023T160256), Hubei Provincial Natural Science Foundation of China (Grant Nos.
2023AFA020 and 2023AFB295), Fundamental Research Funds for the Central Universities (Grant
Nos. CCNU23XJ007) and Knowledge Innovation
Program of Wuhan-Shuguang Project (Grant Nos.
2023010201020390).
**References**
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama
Ahmad, Ilge Akkaya, Florencia Leoni Aleman,
Diogo Almeida, Janko Altenschmidt, Sam Altman,
Shyamal Anadkat, et al. 2023. Gpt-4 technical report.
_arXiv preprint arXiv:2303.08774._
Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang,
Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou,
and Jingren Zhou. 2023. Qwen-vl: A frontier large
vision-language model with versatile abilities. arXiv
_preprint arXiv:2308.12966._
Yixuan Cao, Feng Hong, Hongwei Li, and Ping Luo.
2021. A bottom-up dag structure extraction model
for math word problems. In Proceedings of the
_AAAI conference on artificial intelligence, volume 35,_
pages 39–46.
Jiaqi Chen, Jianheng Tang, Jinghui Qin, Xiaodan Liang,
Lingbo Liu, Eric P Xing, and Liang Lin. 2021.
Geoqa: A geometric question answering benchmark
towards multimodal numerical reasoning. _arXiv_
_preprint arXiv:2105.14517._
Zhe Chen, Jiannan Wu, Wenhai Wang, Weijie Su,
Guo Chen, Sen Xing, Zhong Muyan, Qinglong
points tested (the sum of the interior angles of a
triangle is 180[◦]), and tests a similar knowledge
point (determine the shape of a triangle based on
its angles) by modifying the problem scenario. This
demonstrates that the symbolic experience, especially graph knowledge injected in the first stage
helps the model find similar knowledge points
based on the given knowledge point.
For Qwen-VL-Chat, the problem it generates,
there is a discrepancy between the idea of the problem and the content of the problem, and the generated problem has not been tested for data rationality
and does not conform to logic (it is paradoxical that
there are two angles of 40[◦] and 65[◦] in a right triangle).
On the other hand, Yi-VL-34B correctly interprets the meaning of the seed problem, analyzes the
knowledge points tested, and modifies the problem
scenario by adding conditions. However, the quality of its problem can be further improved because
it introduces an invalid condition (the length of BC
is meaningless for solving ∠B). Although there
is no logical issue with this constructed problem,
compared to our model’s response, the problem
constructed by our model is more reasonable.
**6** **Conclusion**
In this work, we propose COMET, a “Cone of Experience” enhanced large multimodal model for
mathematical problem generation. Inspired by the
“Cone of Experience” theory, we follow the growth
process of teachers to define the experience as sym
-----
Zhang, Xizhou Zhu, Lewei Lu, et al. 2023. Internvl:
Scaling up vision foundation models and aligning
for generic visual-linguistic tasks. arXiv preprint
_arXiv:2312.14238._
Bryan R Christ, Jonathan Kropko, and Thomas
Hartvigsen. 2024. Mathwell: Generating educational math word problems at scale. arXiv preprint
_arXiv:2402.15861._
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, et al. 2021. Training verifiers to solve math
word problems. arXiv preprint arXiv:2110.14168.
Edgar Dale. 1947. Audio-visual materials. Air Aff.,
2:179.
Leonardo De Moura, Soonho Kong, Jeremy Avigad,
Floris Van Doorn, and Jakob von Raumer. 2015. The
lean theorem prover (system description). In Auto_mated Deduction-CADE-25: 25th International Con-_
_ference on Automated Deduction, Berlin, Germany,_
_August 1-7, 2015, Proceedings 25, pages 378–388._
Springer.
Iddo Drori, Sarah Zhang, Reece Shuttleworth, Leonard
Tang, Albert Lu, Elizabeth Ke, Kevin Liu, Linda
Chen, Sunny Tran, Newman Cheng, et al. 2022. A
neural network solves, explains, and generates university math problems by program synthesis and fewshot learning at human level. Proceedings of the Na_tional Academy of Sciences, 119(32):e2123433119._
Arpad E Elo. 1967. The proposed uscf rating system,
its development, theory, and applications. Chess Life,
XXII(8):242–247.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yujiu Yang,
Minlie Huang, Nan Duan, Weizhu Chen, et al.
2023. Tora: A tool-integrated reasoning agent
for mathematical problem solving. arXiv preprint
_arXiv:2309.17452._
Michael Heilman. 2011. Automatic factual question
_generation from text. Ph.D. thesis, Carnegie Mellon_
University.
Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan
Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. 2021. Lora: Low-rank adaptation of large language models. _arXiv preprint_
_arXiv:2106.09685._
Shima Imani, Liang Du, and Harsh Shrivastava. 2023.
Mathprompter: Mathematical reasoning using large
language models. arXiv preprint arXiv:2303.05398.
Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang
Niu, Lei Zhang, Baochang Ma, and Xiangang Li.
2023. Exploring the impact of instruction data
scaling on large language models: An empirical
study on real-world use cases. _arXiv preprint_
_arXiv:2303.14742._
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and
Regina Barzilay. 2014. Learning to automatically
solve algebra word problems. In Proceedings of the
_52nd Annual Meeting of the Association for Compu-_
_tational Linguistics (Volume 1: Long Papers), pages_
271–281.
Jierui Li, Lei Wang, Jipeng Zhang, Yan Wang, Bing Tian
Dai, and Dongxiang Zhang. 2019. Modeling intrarelation in math word problems with different functional multi-head attentions. In Proceedings of the
_57th annual meeting of the association for computa-_
_tional linguistics, pages 6162–6167._
Zhang Li, Biao Yang, Qiang Liu, Zhiyin Ma, Shuo
Zhang, Jingxu Yang, Yabo Sun, Yuliang Liu, and
Xiang Bai. 2023. Monkey: Image resolution and
text label are important things for large multi-modal
models. arXiv preprint arXiv:2311.06607.
Chin-Yew Lin. 2004. Rouge: A package for automatic
evaluation of summaries. In Text summarization
_branches out, pages 74–81._
Shir Lissak, Nitay Calderon, Geva Shenkman, Yaakov
Ophir, Eyal Fruchter, Anat Brunstein Klomek, and
Roi Reichart. 2024. The colorful future of llms: Evaluating and improving llms as emotional supporters
for queer youth. arXiv preprint arXiv:2402.11886.
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae
Lee. 2024a. Visual instruction tuning. Advances in
_neural information processing systems, 36._
Meilu Liu, Lawrence Jun Zhang, and Christine
Biebricher. 2024b. Investigating students’ cognitive
processes in generative ai-assisted digital multimodal
composing and traditional writing. Computers &
_Education, 211:104977._
Qianying Liu, Wenyv Guan, Sujian Li, and Daisuke
Kawahara. 2019. Tree-structured decoding for solving math word problems. In Proceedings of the 2019
_conference on empirical methods in natural language_
_processing and the 9th international joint conference_
_on natural language processing (EMNLP-IJCNLP),_
pages 2370–2379.
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wizardmath: Empowering mathematical reasoning for
large language models via reinforced evol-instruct.
_arXiv preprint arXiv:2308.09583._
Kumaresh Nandhini and Sadhu Ramakrishnan Balasundaram. 2011. Math word question generation
for training the students with learning difficulties.
In Proceedings of the International Conference &
_Workshop on Emerging Trends in Technology, pages_
206–211.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida,
Carroll Wainwright, Pamela Mishkin, Chong Zhang,
Sandhini Agarwal, Katarina Slama, Alex Ray, et al.
-----
2022. Training language models to follow instructions with human feedback. Advances in neural in_formation processing systems, 35:27730–27744._
Kishore Papineni, Salim Roukos, Todd Ward, and Wei[Jing Zhu. 2002. Bleu: a method for automatic evalu-](https://doi.org/10.3115/1073083.1073135)
[ation of machine translation. In Proceedings of the](https://doi.org/10.3115/1073083.1073135)
_40th Annual Meeting of the Association for Compu-_
_tational Linguistics, pages 311–318, Philadelphia,_
Pennsylvania, USA. Association for Computational
Linguistics.
Oleksandr Polozov, Eleanor O’Rourke, Adam M
Smith, Luke Zettlemoyer, Sumit Gulwani, and Zoran Popovi´c. 2015. Personalized mathematical word
problem generation. In Twenty-Fourth International
_Joint Conference on Artificial Intelligence._
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn.
2024. Direct preference optimization: Your language
model is secretly a reward model. Advances in Neu_ral Information Processing Systems, 36._
Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase,
and Yuxiong He. 2020. Zero: Memory optimizations
toward training trillion parameter models. In SC20:
_International Conference for High Performance Com-_
_puting, Networking, Storage and Analysis, pages 1–_
16. IEEE.
Subhro Roy and Dan Roth. 2016. Solving general arithmetic word problems. _arXiv preprint_
_arXiv:1608.01413._
Subhro Roy, Tim Vieira, and Dan Roth. 2015. Reasoning about quantities in natural language. Transac_tions of the Association for Computational Linguis-_
_tics, 3:1–13._
Rohit Singh, Sumit Gulwani, and Sriram Rajamani.
2012. Automatically generating algebra problems.
In Proceedings of the AAAI Conference on Artificial
_Intelligence, volume 26, pages 1620–1628._
Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li,
Qinyuan Cheng, Hang Yan, Xiangyang Liu, Yunfan
Shao, Qiong Tang, Xingjian Zhao, et al. 2023. Moss:
Training conversational language models from synthetic data. arXiv preprint arXiv:2307.15020, 7.
Hanchen Wang, Tianfan Fu, Yuanqi Du, Wenhao
Gao, Kexin Huang, Ziming Liu, Payal Chandak,
Shengchao Liu, Peter Van Katwyk, Andreea Deac,
et al. 2023a. Scientific discovery in the age of artificial intelligence. Nature, 620(7972):47–60.
Lei Wang, Yan Wang, Deng Cai, Dongxiang Zhang,
and Xiaojiang Liu. 2018. Translating a math word
problem to an expression tree. _arXiv preprint_
_arXiv:1811.05632._
Weihan Wang, Qingsong Lv, Wenmeng Yu, Wenyi
Hong, Ji Qi, Yan Wang, Junhui Ji, Zhuoyi Yang,
Lei Zhao, Xixuan Song, et al. 2023b. Cogvlm: Visual expert for pretrained language models. arXiv
_preprint arXiv:2311.03079._
Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017.
Deep neural solver for math word problems. In Pro_ceedings of the 2017 conference on empirical meth-_
_ods in natural language processing, pages 845–854._
Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu,
Lijie Wang, Haihua Yang, Biye Li, Cheng Cheng,
Weiwei Lü, Rui Hu, et al. 2023. Skywork: A more
open bilingual foundation model. _arXiv preprint_
_arXiv:2310.19341._
Qinzhuo Wu, Qi Zhang, and Xuanjing Huang. 2022.
Automatic math word problem generation with topicexpression co-attention mechanism and reinforcement learning. IEEE/ACM Transactions on Audio,
_Speech, and Language Processing, 30:1061–1072._
Zhen Yang, Ming Ding, Qingsong Lv, Zhihuan Jiang,
Zehai He, Yuyi Guo, Jinfeng Bai, and Jie Tang. 2023.
Gpt can solve mathematical problems without a calculator. arXiv preprint arXiv:2309.03241.
Alex Young, Bei Chen, Chao Li, Chengen Huang,
Ge Zhang, Guanwei Zhang, Heng Li, Jiangcheng
Zhu, Jianqun Chen, Jing Chang, et al. 2024. Yi:
Open foundation models by 01.ai. arXiv preprint
_arXiv:2403.04652._
Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen.
2023. Mammoth: Building math generalist models
through hybrid instruction tuning. arXiv preprint
_arXiv:2309.05653._
Daochen Zha, Kwei-Herng Lai, Fan Yang, Na Zou,
Huiji Gao, and Xia Hu. 2023. Data-centric ai: Techniques and future perspectives. In Proceedings of
_the 29th ACM SIGKDD Conference on Knowledge_
_Discovery and Data Mining, pages 5839–5840._
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric Xing, et al. 2024.
Judging llm-as-a-judge with mt-bench and chatbot
arena. Advances in Neural Information Processing
_Systems, 36._
Aojun Zhou, Ke Wang, Zimu Lu, Weikang Shi, Sichun
Luo, Zipeng Qin, Shaoqing Lu, Anya Jia, Linqi
Song, Mingjie Zhan, et al. 2023. Solving challenging
math word problems using gpt-4 code interpreter
with code-based self-verification. _arXiv preprint_
_arXiv:2308.07921._
Qingyu Zhou and Danqing Huang. 2019. Towards generating math word problems from equations and topics. In Proceedings of the 12th international confer_ence on natural language generation, pages 494–503._
Mingyu Zong and Bhaskar Krishnamachari. 2023. Solving math word problems concerning systems of equations with gpt-3. In Proceedings of the AAAI Con_ference on Artificial Intelligence, volume 37, pages_
15972–15979.
-----
| [
"Sannyuya, Liu",
"Jintian, Feng",
"Zongkai, Yang",
"Yawei, Luo",
"Qian, Wan",
"Xiaoxuan, Shen",
"Jianwen, Sun"
] | 2024-07-15T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2407.11315 | https://arxiv.org/abs/2407.11315 | https://www.semanticscholar.org/paper/5149e14a9a3e6336190e9369fd09df71a1eca886 |
CPL: Critical Planning Step Learning Boosts LLM Generalization in Reasoning Tasks | Post-training large language models (LLMs) to develop reasoning capabilities has proven effective across diverse domains, such as mathematical reasoning and code generation. However, existing methods primarily focus on improving task-specific reasoning but have not adequately addressed the model's generalization capabilities across a broader range of reasoning tasks. To tackle this challenge, we introduce Critical Planning Step Learning (CPL), which leverages Monte Carlo Tree Search (MCTS) to explore diverse planning steps in multi-step reasoning tasks. Based on long-term outcomes, CPL learns step-level planning preferences to improve the model's planning capabilities and, consequently, its general reasoning capabilities. Furthermore, while effective in many scenarios for aligning LLMs, existing preference learning approaches like Direct Preference Optimization (DPO) struggle with complex multi-step reasoning tasks due to their inability to capture fine-grained supervision at each step. We propose Step-level Advantage Preference Optimization (Step-APO), which integrates an advantage estimate for step-level preference pairs obtained via MCTS into the DPO. This enables the model to more effectively learn critical intermediate planning steps, thereby further improving its generalization in reasoning tasks. Experimental results demonstrate that our method, trained exclusively on GSM8K and MATH, not only significantly improves performance on GSM8K (+10.5%) and MATH (+6.5%), but also enhances out-of-domain reasoning benchmarks, such as ARC-C (+4.0%), BBH (+1.8%), MMLU-STEM (+2.2%), and MMLU (+0.9%). | null | ## CPL: Critical Planning Step Learning Boosts LLM Generalization in Reasoning Tasks
**Tianlong Wang[1][,][2][∗], Xueting Han[2][†], Jing Bai[2]**
1
Peking University
2
Microsoft Research Asia
[email protected], chrihan, jbai @microsoft.com
_{_ _}_
Abstract
Post-training large language models (LLMs) to develop reasoning capabilities has proven effective across diverse domains, such as mathematical
reasoning and code generation. However, existing methods primarily focus
on improving task-specific reasoning, but have not adequately addressed
the model’s generalization capabilities across a broader range of reasoning
tasks. To tackle this challenge, we introduce Critical Planning Step Learning (CPL), which leverages Monte Carlo Tree Search (MCTS) to explore
diverse planning steps in multi-step reasoning tasks. Based on long-term
outcomes, CPL learns step-level planning preferences to improve the model’s
planning capabilities and, consequently, its general reasoning capabilities.
Furthermore, while effective in many scenarios for aligning LLMs, existing
preference learning approaches like Direct Preference Optimization (DPO)
struggle with complex multi-step reasoning tasks due to their inability to
capture fine-grained supervision at each step. We propose Step-level Advantage Preference Optimization (Step-APO), which integrates an advantage
estimate for step-level preference pairs obtained via MCTS into the DPO.
This enables the model to more effectively learn critical intermediate planning steps, thereby further improving its generalization in reasoning tasks.
Experimental results demonstrate that our method, trained exclusively on
GSM8K and MATH, not only significantly improves performance on GSM8K
(+10.5%) and MATH (+6.5%), but also enhances out-of-domain reasoning benchmarks, such as ARC-C (+4.0%), BBH (+1.8%), MMLU-STEM
(+2.2%), and MMLU (+0.9%).
1 Introduction
Recent studies focus on enhancing the reasoning capabilities of large language models (LLMs)
through various approaches, including collecting high-quality and domain-specific data
(Gunasekar et al., 2023; Shao et al., 2024; Dubey et al., 2024), designing elaborate prompting
techniques (Wei et al., 2023; Yao et al., 2023a;b), and developing advanced optimization
algorithms (Ouyang et al., 2022; Rafailov et al., 2023; Ethayarajh et al., 2024; Yuan et al.,
2023). Among these approaches, training on model-generated synthetic data is a promising
method. Specifically, recent work (Feng et al., 2023; Chen et al., 2024; Xie et al., 2024)
leverages Monte Carlo Tree Search (MCTS) (Kocsis & Szepesv´ari, 2006) to iteratively collect
reasoning paths to boost LLM’s reasoning capabilities.
MCTS strikes a balance between exploration and exploitation, utilizing its look-ahead ability
to obtain high-quality step-level supervision. However, a primary challenge with MCTS for
LLMs is the high inference latency and the vast search space, which limits the diversity
of explored reasoning paths. Additionally, existing methods primarily focus on enhancing
task-specific or domain-specific reasoning capabilities, such as for math or code. This has led
to significant improvements in specific tasks but has not adequately addressed the model’s
_∗Work is done during internship at Microsoft Research Asia._
_†Corresponding author._
-----
helps models develop more task-agnostic skills, leading to improved generalization. Our
(a) Impact of High-Quality Plans (b) In-domain Performance (c) Out-of-domain Performance
100
Mistral-7B DeepSeekMath-7B-Base DeepSeekMath-7B-Base
80 Mistral-7B (with plan from Qwen2-72B) CPL CPL
70 70.66 80 73.77 60 58.7960.54
60 63.23
50 60 56.06 54.93 54.70
40.71 53.84
Accuracy (%)4030 Accuracy (%) 40 35.18 41.64 Accuracy (%) 52.05 52.74
20 13.10 17.66 20 50
10
0 0
GSM8K MATH GSM8K MATH ARC-c BBH MMLU-stem MMLU
Figure 1: Impact of High-Quality Plans. (a): Mistral 7B benefits significantly from a well-crafted
plan provided by Qwen2-72B. Comparison between the DeepSeekMath-7B-Base model and our
CPL-trained model. (b) Our CPL method significantly outperforms the baselines in in-domain tasks.
(c) CPL also demonstrates substantial improvements in out-of-domain reasoning tasks, proving its
ability to generalize across a wider range of reasoning tasks.
generalization abilities across various reasoning tasks. For instance, AlphaMath (Chen et al.
2024) greatly boosts mathematical tasks, its performance on other reasoning tasks, such
as BBH (Suzgun et al., 2022) and ARC-C (Clark et al., 2018), did not show significant
improvement.
To improve transfer performance on a broader range of reasoning tasks. We propose that
effectively learning planning strategies to solve complex problems is crucial for improving
LLM’s reasoning capabilities and generalization. Approaches (Hao et al., 2023; Yao et al.
2023b) explore using LLMs to generate both reasoning traces and task-specific actions in an
interleaved manner to boost reasoning capabilities. We suggest that task-specific actions are
the execution steps that follow a plan and are often more closely tied to task-specific skills,
such as mathematical computation. In contrast, generating planning-based reasoning traces
preliminary experiments show that a weaker model benefits significantly from a well-crafted
plan provided by a more capable model (see Figure 1 (a)). This underscores that learning
effective planning is crucial for handling complex reasoning tasks.
Thus, we introduce Critical Planning Step Learning (CPL) to efficiently explore diverse
planning strategies via MCTS within the vast search space. This involves devising a step-bystep plan to solve the problem, with the final step providing the full solution based on the
plan and the final answer. This approach generates a plan tree, where high-quality planning
step preferences is obtained from the final result.
Preference learning approaches like Direct Preference Optimization (DPO) (Rafailov et al.,
2023) has proven effective for aligning LLMs. However, it struggles on complex multi-step
reasoning tasks, where the model often fails to identify erroneous steps and learn spurious
correlations from the flawed steps, ultimately hindering model generalization (Hwang et al.,
2024). Recent works propose Step-DPO (Setlur et al., 2024; Lai et al., 2024) to learn
step-level preferences in complex reasoning tasks. A key challenge with Step-DPO lies in
the vast search space of reasoning steps, where the selection of appropriate preference data
for model optimization is crucial. Current approaches often rely on heuristic methods, with
the most common strategy being to identify the first error step as dispreferred. However,
we argue that this approach fails to fully explore the step-level search space, limiting the
model’s optimization potential. To overcome this, we propose Step-level Advantage Preference
Optimization (Step-APO) to better leverage step preference data. By incorporating advantage
estimates between chosen and rejected plans from MCTS, Step-APO enables the model
to learn fine-grained preferences between plan steps, allowing it to identify critical plan
steps and de-emphasize erroneous ones. This further improves the generalization of LLM in
reasoning tasks.
We conduct extensive experiments on both in-domain and out-of-domain reasoning datasets.
Our results demonstrate that CPL significantly enhances the model’s overall reasoning
performance. Specifically, when trained exclusively on GSM8K and MATH, the model
-----
not only shows significant improvement in mathematical tasks including GSM8K(+10.5%)
and MATH(+6.5%), but also achieves better performance on out-of-domain benchmarks,
including ARC-C (+4.0%), BBH (+1.8%), MMLU-STEM (+2.2%), and MMLU (+0.9%).
To conclude, our work makes the following contributions: 1) We introduce CPL, which
leverages MCTS to explore planning steps and learn step-level planning preferences, enhancing
the model’s general reasoning capabilities. 2) We introduce Step-APO to further enhance the
learning of critical planning steps. 3) We achieve significant improvements in both in-domain
and out-of-domain tasks.
2 Related Work
**Search-Guided Reasoning in LLMs Recent advancements (Feng et al., 2023; Chen et al.,**
2024; Xie et al., 2024) in enhancing LLM reasoning capabilities have focused on integrating
Monte Carlo Tree Search (MCTS) to collect trajectories and train models, resulting in notable
advancements for reasoning tasks. For example, AlphaMath Chen et al. (2024) employs
MCTS to automatically generate process supervision, leading to significant improvements in
mathematical reasoning. However, these MCTS-based training methods encounter challenges
such as vast search spaces, limited solution diversity for LLMs. Furthermore, there is
limited research on how these methods generalize to other reasoning tasks and enhance
overall reasoning capabilities. To address these issues, we propose a method for searching
over plan steps and learning critical plan steps for problem-solving, which aims to enhance
generalization across a range of reasoning tasks.
**Direct Preference Optimization (DPO) Algorithms DPO (Rafailov et al., 2023) uses**
solution-level preference data for model optimization but has notable limitations. It struggles
with multi-step reasoning tasks because it cannot effectively correct specific errors within
the reasoning process (Hwang et al., 2024). Moreover, training on model-generated positive
data can amplify spurious correlations from incorrect intermediate steps, leading to poor
generalization (Setlur et al., 2024). Recent work proposes step-level DPO (Setlur et al., 2024;
Lai et al., 2024) to address these issues by providing the fine-grained error identification
needed for improving reasoning capabilities. For example, SELF-EXPLORE Hwang et al.
(2024) identifies the first incorrect step in a solution and constructs step-level preference
data to guide model improvement. Unlike these heuristic methods, we propose Step-APO to
fully explore the step-level search space and achieve the maximum optimization potential.
3 Methods
Our Critical Planning Step Learning (CPL) framework is illustrated in Figure 2. In this
section, we first introduce our planning based MCTS, which enables the LLM to learn critical
planning steps. Next, we present our Step-APO in detail to further explore the potential
of step-level preference learning in multi-step reasoning task. Finally, we describe how we
iteratively optimize the policy model and value model.
3.1 Critical Planning Step Learning with MCTS
MCTS builds a reasoning tree iteratively and autonomously explores step-level reasoning
traces, which can be used to optimize LLMs. Existing methods (Chen et al., 2024; Xie
et al., 2024) that leverage MCTS to collect data for training usually focus on exploring
solution steps within the entire search space or on simultaneously exploring both plans
and solutions. To improve transfer performance across a broader range of reasoning tasks,
we propose learning effective and diverse planning, which enables the model to acquire
more task-agnostic capabilities and thereby achieve better generalization. We first create a
step-by-step plan to solve the problem, with the final step presenting the full solution and
final answer based on the plan. The prompt is provided in the Appendix A.1. Ultimately,
we obtain a plan tree and high-quality planning step supervision through iterative search
simulations with MCTS (Figure 2).
-----
|𝜋 𝑎 |𝑠 𝑉 𝑠 𝜃 𝑡 𝑡 ∅ 𝑡+1 Policy Model Value Model|Col2|
|---|---|
|𝑳𝑺𝒕𝒆𝒑−𝑨𝑷𝑶 𝝅𝜽; 𝝅𝒓𝒆𝒇 𝑬 𝑽∅(𝒔𝒕+𝟏) −𝑽(𝒔𝒕+𝟏)𝟐||
|Policy & Value Model Training||
Question: The least common multiple of two integers
is 36 and 6 is their greatest common divisor. What is
the product of the two numbers?
Plan 1: Understand the concept of least Plan 1: Understand that the least
common multiple (LCM) and greatest common multiple (LCM) of two
common divisor (GCD)… numbers represents… 𝜋𝜃 𝑎𝑡|𝑠𝑡 𝑉∅ [𝑠]𝑡+1
**V=0.14** **V=0.12**
**1** product of two numbers can be Plan 2: Recognize that the **2** the relationship Plan 2: Apply **3GCD formula to calculate the Plan 2: Apply an LCM-** **Policy Model** **Value Model**
expressed as the product of their between LCM and product of the two numbers.
LCM and GCD, divided by the GCD to find the The formula is: LCM(a, b) *
GCD. This relationship holds true product of the two GCD(a, b) = a * b.
for any two numbers. numbers.
**…** **V=-0.51** **…** **V=0.53** **…** **V=0.63** **3** **>** **1** 𝑳𝑺𝒕𝒆𝒑−𝑨𝑷𝑶 𝝅𝜽; 𝝅𝒓𝒆𝒇 𝑬 𝑽∅(𝒔𝒕+𝟏) − 𝑽(𝒔[] 𝒕+𝟏) 𝟐
**2** **>** **1**
Solution: Solution: Solution:
1.. **…** 1.. **…** 1..
2.. 2.. 2..
The answer is 36 The answer is 216 The answer is 216
**V=-1** **V=1** **V=1**
Select
Planning Based MCTS Backup Policy & Value Model Training
Figure 2: CPL boosts model performance via iterative process over planning based MCTS and
step-level preference learning. Left: Example of an MCTS-generated plan tree, exploring diverse
planning strategies in the vast search space. CPL generates step-by-step plans, which lead to the
final solution and answer. State value V is updated via a bottom-up reward propagation from
the terminal node to the root, and used to assign preferences. Right: Step-level preferences from
MCTS are used to update the policy and value models. Our Step-APO integrates value estimates
for preference pairs into DPO, assigning different optimization weights to emphasize critical steps.
The value model is optimized using MSE loss.
Specifically, given the plan tree, each node represents a state st, and each edge represents
_T_
an action at, which corresponds to a reasoning step that leads to the next state st+1. Under
the same parent node, different sibling nodes form a set of step-level preference pairs, with
each node having its own value V (st) representing the expected future reward under state st.
These values can be obtained through the MCTS process, which involves four key operations:
selection, expansion, evaluation, and backup. To enhance efficiency, we use a value model to
estimate rewards for intermediate steps, with the final integration of both policy and value
models guiding the search process. Next, we describe the four steps of MCTS.
**Selection: We use the PUCT algorithm to guide the selection process with the following**
formula, where N represents the visit count:
_N_ (st)
arg max _Q(st, at) + cpuctπθ(at_ **st)** _._ (1)
**at** _|_ 1 + N (st, at)
p
**Expansion and Evaluation: During expansion, we sample multiple possible candidate**
actions for the next step. During evaluation, the final answer in the terminal action is
evaluated with the ground truth, otherwise, the value is predicted by the value model.
**Backup: Once a terminal node is reached, we perform a bottom-up update from the terminal**
node back to the root. We update the visit count N, the state value V, and the transition
value Q as follows:
_Q(st, at) ←_ _r(st, at) + V (st+1)_ (2)
_V (st) ←_ _N_ (st+1)Q(st, at)/ _N_ (st+1) (3)
_a_ _a_
X X
_N_ (st) _N_ (st) + 1. (4)
_←_
3.2 Step-APO
Unlike mainstream approaches (Hwang et al., 2024; Lai et al., 2024) that learn step-level
preferences by identifying the first error step and sampling a corresponding preferred step,
-----
while potentially yielding more accurate preferences, this method lacks sufficient exploration
of the vast reasoning trace space. Given the large variations in advantage differences
across different data pairs, we propose Step-APO, which introduces advantage estimates
for preference pairs into DPO. This enables the model to more effectively learn critical
intermediate planning steps, thereby further improving its reasoning capabilities. Next, We
will provide its derivation and analysis from the perspective of its gradient.
3.2.1 Preliminaries
**The Classical RL Objective RLHF approaches (Ziegler et al., 2020; Bai et al., 2022;**
Ouyang et al., 2022) usually first learn a reward function from human feedback, then
optimize it with a policy gradient-based method like PPO (Schulman et al., 2017) with an
entropy-bonus using the following multi-step RL objective:
max (r(st, at) + β log πref(at **st)) + β** (πθ) **s0** _ρ(s0)_ _,_ (5)
_πθ_ [E][a][t][∼][π][θ][(][·|][s][t][)] _t=0_ _|_ _H_ _|_ _∼_
X KL penalty
where r(st, at) denotes the step-level reward function, followed by a KL penalty that aims| {z }
to ensure the learned policy πθ does not deviate significantly from the reference policy πref.
_πref is typically produced via supervised fine-tuning._
**Direct Preference Optimization DPO (Rafailov et al., 2023) uses the well-known closed-**
form optimal solution, which establishes a mapping between the reward model and the
optimal policy under the KL divergence, obtaining the reward as:
_r(x, y) = β log π[∗](y|x) −_ _β log πref(y|x) −_ _Z(x),_ (6)
where x denotes the prompt and y denotes the response, π[∗] is the optimal policy and Z(x)
is the partition function that normalizes it. Substituting eq. (6) into the Bradley Terry
preference model, and leverage the maximum likelihood objective, DPO derives the loss:
DPO(πθ; πref) = E(x,yw,yl) log σ _β log_ _[π][θ][(][y][w][ |][ x][)]_ _,_ (7)
_L_ _−_ _∼D_ _πref(y[w]_ **x)** _πref(y[l]_ **x)**
_|_ _[−]_ _[β][ log][ π][θ][(][y][l][ |]|[ x][)]_
where σ denotes the logistic function, y[w] and y[l] denote the preferred and dis-preferred
responses to the prompt x.
3.2.2 Deriving the Step-APO Objective
In the general maximum entropy RL setting (Ziebart, 2010), the optimal policy π[∗](a **s) of**
_|_
multi-step RL objective in eq. (5) is:
_π[∗](at_ **st) = e[(][Q][∗][(][s][t][,][a][t][)][−][V][ ∗][(][s][t][))][/β],** (8)
_|_
where Q[∗](s, a) is the optimal Q-function which models the total future reward from (st, at)
under π[∗]. The optimal value function V estimates the total future reward under state st,
_[∗]_
and it’s a function of Q[∗] (Rafailov et al., 2024).
Under the reward r with a KL divergence penalty, the relationship between Q-function and
step-level reward function can be established with the Bellman equation as follows:
_Q[∗](st, at) = r(st, at) + β log πref(at_ **st) + V** (st+1). (9)
_|_ _[∗]_
By log-linearizing the optimal policy in eq. (8) and substituting in the Bellman equation
from eq. (9) (Nachum et al., 2017; Rafailov et al., 2024), we have below equation which is
precisely the optimal advantage function A[∗](s, a) = Q[∗](s, a) − _V_ _[∗](s):_
_β log_ _[π][∗][(][a][t][|][s][t][)]_ (10)
_πref(at|st) [=][ r][(][s][t][,][ a][t][) +][ V][ ∗][(][s][t][+1][)][ −]_ _[V][ ∗][(][s][t][)][.]_
Unlike DPO utilize response-level Bradley Terry model, we introduce step-level Bradley
Terry preference model to learn fine-grained step-level preference:
exp (r(s, a[w]))
_p[∗](a[w]_ **a[l]** **s) =** (11)
_⪰_ _|_ exp (r(s, a[w])) + exp (r(s, a[l])) _[.]_
-----
By substituting eq. (10) into eq. (11) and leveraging the negative log-likelihood loss, we
derive the objective for step-APO:
_t_
_LStep-APO(πθ; πref) = −E(st,atw[,][a]t[l]_ [)][∼D] log σ _β log_ _π[π]ref[θ][(]([a]a[w]t[w][|][ s][t][)]_ _t+1[)]_
_[|][ s][t][) +][ V][ (][s][t][)][ −]_ _[V][ (][s][w]_
_β log_ _[π][θ][(][a]t[l]_ _[|][ s][t][)]_ _t+1[)]_
_−_ _πref(at[l]_
_[|][ s][t][) +][ V][ (][s][t][)][ −]_ _[V][ (][s][l]_
_t_
= −E(st,atw[,][a]t[l] [)][∼D] log σ _β log_ _π[π]ref[θ][(]([a]a[w]t[w][|][ s][t][)]_ _t+1[)]_
_[|][ s][t][)][ −]_ _[V][ (][s][w]_
_β log_ _[π][θ][(][a]t[l]_ _[|][ s][t][)]_ _t+1[)]_ _._ (12)
_−_ _πref(at[l]_
_[|][ s][t][) +][ V][ (][s][l]_
where V (s[w]t+1[)][ −] _[V][ (][s]t[l]+1[) denotes the advantage of][ s]t[w]+1_ [to][ s]t[l]+1 [from the same start state.]
To understand the difference between our step-APO and other step-DPO, we will analyze
the gradient of the LStep-APO:
_θ_ Step-APO(πθ; πref) = _βE(st,atw[,][a]t[l]_ [)][∼D] _σ_ _rˆθ(st, at[l][)][ −]_ _r[ˆ]θ(st, at[w][)]_
_∇_ _L_ _−_
+ V (s[w]t+1[)][ −] _[V][ (][s]t[l]+1[)]_ _∇θ log π(at[w]_ _[|][ s][t][)][ −∇][θ]_ [log][ π][(][a]t[l] _[|][ s][t][)]_
[]
(13)
where ˆrθ(st, at) = β log _π[π]ref[θ][(]([a]a[t]t[|]|[s]s[t]t[)])_ [. Intuitively, the gradient of the loss function][ L][Step-APO]
increases the likelihood of the preferred completions at[w] [and decreases the likelihood of]
dispreferred completions at[l][. Importantly, besides the examples are weighed by how much]
higher the ˆrθ incorrectly orders the completions, the examples are also weighted by how
much higher the advantage of at[w] [is compared to][ a]t[l][. Our experiments prove the importance]
of this weighting.
3.3 Iterative Training of Policy and Value Model
As shown in Figure 2, our approach employs iterative training for policy and value models.
Our policy model πθ and value model vϕ are two separate models, both adapted from the
same base model. We add a value head for the value model, which is randomly initialized in
the first round. However, as the MCTS simulations proceed in the first round, rewards from
terminal nodes are back-propagated to the intermediate nodes, reducing the negative impact
of the random value initialization.
For policy model training, we first supervised fine-tune (SFT) it using collected correct paths,
then apply our Step-APO (Eq. 8) using collected step-level preference data both from MCTS.
Notably, V (s[w]t+1[) and][ V][ (][s]t[l]+1[) in eq.8, obtained from MCTS, represent the values of the]
corresponding states. The difference between these values reflects the advantage difference
of the two actions under the same previous state st:
_A(st, at[w][)][ −]_ _[A][(][s][t][,][ a]t[l][) =][ Q][(][s][t][,][ a]t[w][)][ −]_ _[V][ (][s][t][)][ −]_ [(][Q][(][s][t][,][ a]t[l][)][ −] _[V][ (][s][t][)) =][ V][ (][s][w]t+1[)][ −]_ _[V][ (][s]t[l]+1[)][.][ (14)]_
For value model optimization, we sue a mean squared error (MSE) loss between the value
model’s predict and values from MCTS. With the updated policy and value models, we can
advance to the next-round MCTS, iterating this training process to enhance the models.
4 Experiments
4.1 Implementation Details
We iteratively generate data via MCTS and train our policy and value models in two rounds.
In each round, planning steps and final solution steps are generated by the policy model using
-----
MCTS to solve the given problem. The value model was employed to assist in evaluating the
step plans during MCTS. At the end of each round, the generated data was used to train
both the policy model and the value model.
**Model Architecture We utilize the DeepSeekMathBase-7B (Shao et al., 2024) as our initial**
policy model and add a randomly initialized value head to this model, serving as the initial
value model. We then optimize these two separate models and use the updated models for
the next round of data generation.
**Datasets We construct our training data using the GSM8K (Cobbe et al., 2021) and MATH**
(Hendrycks et al., 2021b) datasets. The GSM8K dataset consists of 7,473 training and 1,319
test problems, while the MATH dataset contains 7,500 training and 5,000 test problems.
From these datasets, we exclusively extracted question-answer pairs from the training sets of
GSM8K and MATH, omitting the human-annotated solution. This resulted in a total of 15k
question-answer pairs to construct our training data.
**Training Data Generation via MCTS For each problem, we utilize MCTS to generate**
multiple step-level plans and final solutions. In the first round, we generate data from a
subset of 5k question-answer pairs, consisting of 4k from the MATH and 1k from GSM8K,
for efficiency. We carefully design prompts and 2-shot demonstrations to guide the model’s
output, see Appendix A.1 for details. MCTS is run for 200 simulations in this phase to
mitigate the impact of the random initialization of the value model. Starting from the
second round, with the fine-tuned model from the first round, we use the full set of 15k
question-answer pairs for data generation. A 2-shot prompt formatted in XML is used, and
MCTS is executed for 100 simulations. During the MCTS expansion phase, we expanded 5
child nodes for the root node and 3 child nodes for other nodes. We apply a temperature of
0.7 to encourage diverse generation.
We list the statistic for the generated data in two rounds in Table 1. For plan step
preference data, we categorize sibling nodes as ”preferred” if their value is greater than 0 and
”dispreferred” if their value is less than 0, forming preference pairs from any combination. For
the final solution step data, we only create pairs between the max value (> 0) and the min
value (< 0). This is based on our experimental findings that an excess of solution data can
negatively impact the performance on out-of-domain reasoning tasks, whereas increasing the
emphasis on planning data improves performance in both mathematical and other reasoning
tasks. Table 1 shows that Round 2 generates more correct responses, indicating a stronger
policy and value model.
Table 1: Statistic for the generated data in two rounds
**Round Num** **Avg Depths** **Pos:Neg** **Plan Pairs Count** **Solution Pairs Count**
Round 1 4.18 1:3.16 18742 16506
Round 2 3.80 1:1.23 24707 24633
**Training Details For the policy model, we first randomly select up to four correct responses**
per problem for supervised fine-tuning (SFT), resulting in approximately 50k SFT data
per round. Next, we use step-level preference data from MCTS to train the model with
our Step-APO algorithm. The statistics for preference data are listed in Table 1. For the
value model, we use values from MCTS for partial solutions as labels to update the model.
This allows the value model to score both partial plans and complete responses. The SFT
data for the value model consists of approximately 200k examples in Round 1. The training
hyperparameters are provided in the Appendix A.2. Notably, because the value difference
for final solution step preference pairs is 2, while the value difference for other plan steps
ranges between 0.6 and 0.8, we apply a scaling factor of 0.3 to the values of solution steps.
In the second round of training, we use the data from the second round to train the base
model, rather than the Round 1 model.
-----
4.2 Evaluation Reasoning Tasks
We evaluate our method on both mathematical tasks and other out-of-domain reasoning
tasks.
**Mathematical tasks. We evaluate our in-domain capabilities on MATH and GSM8K test**
set in a zero-shot setting. We use vLLM (Kwon et al., 2023) for inference during evaluation
and the math evaluation toolkit by Zhang et al. (2024).
**Out-of-domain reasoning tasks. We select three benchmarks for evaluating out-of-domain**
reasoning: BIG-Bench Hard (BBH) (Suzgun et al., 2022), ARC-C (Clark et al., 2018), and
MMLU-STEM (MMLU) (Hendrycks et al., 2021a). BBH consists of 23 challenging tasks
requiring multi-step reasoning, designed to test capabilities beyond the performance of
language models, particularly in cases where traditional few-shot methods underperform.
ARC-C focuses on commonsense reasoning and complex science-related questions, posing
a significant challenge for models to handle nuanced scientific concepts. MMLU-STEM
is a subset of the MMLU benchmark, covering subjects like mathematics, physics, and
engineering, aiming to assess the model’s performance in STEM disciplines. We employ
few-shot prompting using lm-evaluation-harness (Gao et al., 2024) for evaluation on these
benchmarks.
4.3 Results on Mathematical Tasks
As shown in Table 2, our CPL significantly boosts performance on in-domain tasks. In both
rounds, Step-APO consistently improves results over the SFT significantly. Additionally,
Round 2 outperforms Round 1 in both SFT and Step-APO, demonstrating that the updated
policy and value models generate better data through MCTS, further improving performance.
Table 2: Model performance on MATH and GSM8K, DeepSeekMath-Base results are reproduced,
with the originally reported numbers in parentheses. Best results are bolded.
**Model** **GSM8K** **MATH**
DeepSeekMath-Base (4-shot) 63.23(64.20) 35.18(36.20)
CPL(Round 1 SFT) 63.79 36.3
CPL(Round 1 Step-APO) 71.06 40.56
CPL(Round 2 SFT) 69.75 39.16
CPL(Round 2 Step-APO) **73.77** **41.64**
4.4 Results on Out-of-domain Reasoning Tasks
From Table 3, we can see that our approach also achieves significant improvements on OOD
tasks, demonstrating that CPL enhances the model’s generalization ability across diverse
reasoning tasks. Compared to AlphaMath, which was trained on the same 15k dataset using
the REACT format for 3 round, our performance on these OOD reasoning tasks is noticeably
better. Notably, AlphaMath even shows a decrease in performance on certain tasks, such as
a 2.2 drop in BBH.
4.5 Advantage of Planning-based Learning
In our preliminary experiments, we aim to verify whether planning-based learning outperforms
solution-based learning on OOD tasks. We conducted these experiments on the GSM8K
and MATH training set and evaluated on BBH. Specifically, we compared CoT-formatted
SFT with our planning-based prompt SFT, both of which were fine-tuned using model
self-generated data, filtered based on the correctness of the answer. In this experiment, we
sampled only one response per problem for training. The results in Table 4 demonstrate
that planning-based learning enhances performance on BBH, whereas CoT SFT does not
show significant improvements.
-----
Table 3: Model performance on out-of-domain tasks. Best results are bolded. We use 25-shot for
ARC-C, 3-shot CoT for BBH, 5-shot for MMLU-STEM (MMLU).
**Model** **ARC-C** **BBH** **MMLU-STEM (MMLU)**
DeepSeekMath-Base 52.05 58.79 52.74(53.84)
AlphaMath 53.41 56.63 **55.31(54.55)**
CPL(Round 1 SFT) 54.44 59.68 54.58(54.22)
CPL(Round 1 Step-APO) 55.55 60.18 55.15(54.66)
CPL(Round 2 SFT) 54.95 59.93 55.44(54.44)
CPL(Round 2 Step-APO) **56.06** **60.54** 54.93(54.70)
Table 4: Advantage of Planning-base Learning
**Model** **BBH(3-shot CoT)**
DeepSeekMath-Base 58.79
CoT SFT 58.92
Planning-based Learning SFT **59.5**
4.6 Advantage of Step-APO
We aim to analyze the advantages of Step-APO over non-advantage integrated step-DPO.
Our experiments reveal that Step-APO achieves superior performance on both in-domain
and out-of-domain tasks. This demonstrates that our method, by reinforcing important steps
through preference learning, leads to more effective model optimization.
Table 5: Advantage of Step-APO
**Model** **GSM8K** **MATH** **ARC-C** **BBH** **MMLU-STEM(MMLU)**
DeepSeekMath-Base 63.23 35.18 52.05 58.79 52.74(53.84)
Round 2 SFT 69.75 39.16 54.95 59.93 **55.44(54.44)**
Round 2 Step-DPO 72.80 41.47 55.63 60.40 55.20(54.68)
Round 2 Step-APO **73.77** **41.64** **56.06** **60.54** 54.93(54.70)
5 Conclusion
In this work, we propose that learning planning can improve a model’s reasoning and
generalization capabilities. By focusing on finer-grained learning of plan step preferences
through our Step-APO, the model can identify critical planning steps within the reasoning
trace, further enhancing its reasoning ability. Although we trained on GSM8K and MATH
data, our approach has demonstrated general improvements on other reasoning tasks such as
BBH, ARC-C, and MMLU-STEM.
Finding an effective way to improve transfer performance to more reasoning tasks and
enhance overall model generalization in reasoning remains an open and important research
question that has yet to be fully addressed. We believe that learning the critical planning
steps for solving a problem is crucial for enhancing the model’s reasoning capabilities. At
the same time, the relative advantages between these planning steps are important for
optimization. Additionally, the diversity of preference data is essential for learning various
planning strategies. In future work, we plan to explore the application of our method to
other types of data, such as code. Additionally, we will continue to refine our approach,
exploring various improvements such as enhancing the diversity of planning steps to better
capture a broader range of planning step preferences.
-----
References
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy
Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
Guoxin Chen, Minpeng Liao, Chengxi Li, and Kai Fan. Alphamath almost zero: process
supervision without process. CoRR, abs/2405.03553, 2024. doi: 10.48550/ARXIV.2405.
[03553. URL https://doi.org/10.48550/arXiv.2405.03553.](https://doi.org/10.48550/arXiv.2405.03553)
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick,
and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning
[challenge, 2018. URL https://arxiv.org/abs/1803.05457.](https://arxiv.org/abs/1803.05457)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and
John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168,
[2021. URL https://arxiv.org/abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Tri Dao. Flashattention-2: Faster attention with better parallelism and work partitioning.
_[CoRR, abs/2307.08691, 2023. doi: 10.48550/ARXIV.2307.08691. URL https://doi.org/](https://doi.org/10.48550/arXiv.2307.08691)_
```
10.48550/arXiv.2307.08691.
```
Abhimanyu Dubey, Abhinav Jauhri, Abhinav Pandey, Abhishek Kadian, Ahmad Al-Dahle,
Aiesha Letman, Akhil Mathur, Alan Schelten, et al. The llama 3 herd of models, 2024.
[URL https://arxiv.org/abs/2407.21783.](https://arxiv.org/abs/2407.21783)
Kawin Ethayarajh, Winnie Xu, Niklas Muennighoff, Dan Jurafsky, and Douwe Kiela. Kto:
[Model alignment as prospect theoretic optimization, 2024. URL https://arxiv.org/](https://arxiv.org/abs/2402.01306)
```
abs/2402.01306.
```
Xidong Feng, Ziyu Wan, Muning Wen, Ying Wen, Weinan Zhang, and Jun Wang. Alphazerolike tree-search can guide large language model decoding and training. arXiv preprint
_arXiv:2309.17179, 2023._
Leo Gao, Jonathan Tow, Baber Abbasi, Stella Biderman, Sid Black, Anthony DiPofi, Charles
Foster, Laurence Golding, Jeffrey Hsu, Alain Le Noac’h, Haonan Li, Kyle McDonell,
Niklas Muennighoff, Chris Ociepa, Jason Phang, Laria Reynolds, Hailey Schoelkopf,
Aviya Skowron, Lintang Sutawika, Eric Tang, Anish Thite, Ben Wang, Kevin Wang,
and Andy Zou. A framework for few-shot language model evaluation, 07 2024. URL
```
https://zenodo.org/records/12608602.
```
Suriya Gunasekar, Yi Zhang, Jyoti Aneja, Caio C´esar Teodoro Mendes, Allie Del Giorno,
Sivakanth Gopi, Mojan Javaheripi, Piero Kauffmann, Gustavo de Rosa, Olli Saarikivi,
Adil Salim, Shital Shah, Harkirat Singh Behl, Xin Wang, S´ebastien Bubeck, Ronen Eldan,
Adam Tauman Kalai, Yin Tat Lee, and Yuanzhi Li. Textbooks are all you need, 2023.
[URL https://arxiv.org/abs/2306.11644.](https://arxiv.org/abs/2306.11644)
Shibo Hao, Yi Gu, Haodi Ma, Joshua Hong, Zhen Wang, Daisy Wang, and Zhiting Hu.
Reasoning with language model is planning with world model. In Houda Bouamor,
Juan Pino, and Kalika Bali (eds.), Proceedings of the 2023 Conference on Empirical
_Methods in Natural Language Processing, pp. 8154–8173, Singapore, December 2023._
Association for Computational Linguistics. doi: 10.18653/v1/2023.emnlp-main.507. URL
```
https://aclanthology.org/2023.emnlp-main.507.
```
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and
Jacob Steinhardt. Measuring massive multitask language understanding, 2021a. URL
```
https://arxiv.org/abs/2009.03300.
```
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang,
Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with
the MATH dataset. In Joaquin Vanschoren and Sai-Kit Yeung (eds.), NeurIPS,
[2021b. URL https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/](https://datasets-benchmarks-proceedings.neurips.cc/paper/2021/hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html)
```
hash/be83ab3ecd0db773eb2dc1b0a17836a1-Abstract-round2.html.
```
-----
Hyeonbin Hwang, Doyoung Kim, Seungone Kim, Seonghyeon Ye, and Minjoon Seo. Selfexplore to avoid the pit: Improving the reasoning capabilities of language models with
[fine-grained rewards, 2024. URL https://arxiv.org/abs/2404.10346.](https://arxiv.org/abs/2404.10346)
Levente Kocsis and Csaba Szepesv´ari. Bandit based monte-carlo planning. In European
_conference on machine learning, pp. 282–293. Springer, 2006._
Woosuk Kwon, Zhuohan Li, Siyuan Zhuang, Ying Sheng, Lianmin Zheng, Cody Hao Yu,
Joseph E. Gonzalez, Hao Zhang, and Ion Stoica. Efficient memory management for large
[language model serving with pagedattention, 2023. URL https://arxiv.org/abs/2309.](https://arxiv.org/abs/2309.06180)
```
06180.
```
Xin Lai, Zhuotao Tian, Yukang Chen, Senqiao Yang, Xiangru Peng, and Jiaya Jia. Step-dpo:
Step-wise preference optimization for long-chain reasoning of llms. arXiv:2406.18629, 2024.
Ofir Nachum, Mohammad Norouzi, Kelvin Xu, and Dale Schuurmans. Bridging the gap
between value and policy based reinforcement learning, 2017.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton,
Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F.
Christiano, Jan Leike, and Ryan Lowe. Training language models to follow instructions
[with human feedback. In NeurIPS, 2022. URL http://papers.nips.cc/paper_files/](http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html)
```
paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html.
```
Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D. Manning, Stefano Ermon,
and Chelsea Finn. Direct preference optimization: Your language model is secretly a
[reward model. In NeurIPS, 2023. URL http://papers.nips.cc/paper_files/paper/](http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html)
```
2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html.
```
Rafael Rafailov, Joey Hejna, Ryan Park, and Chelsea Finn. From r to q[∗]: Your language
[model is secretly a q-function. 2024. URL https://arxiv.org/abs/2404.12358.](https://arxiv.org/abs/2404.12358)
Samyam Rajbhandari, Olatunji Ruwase, Jeff Rasley, Shaden Smith, and Yuxiong He. Zeroinfinity: breaking the GPU memory wall for extreme scale deep learning. In Bronis R.
de Supinski, Mary W. Hall, and Todd Gamblin (eds.), International Conference for High
_Performance Computing, Networking, Storage and Analysis. ACM, 2021. doi: 10.1145/_
[3458817.3476205. URL https://doi.org/10.1145/3458817.3476205.](https://doi.org/10.1145/3458817.3476205)
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
[policy optimization algorithms. CoRR, abs/1707.06347, 2017. URL http://arxiv.org/](http://arxiv.org/abs/1707.06347)
```
abs/1707.06347.
```
Amrith Setlur, Saurabh Garg, Xinyang Geng, Naman Garg, Virginia Smith, and Aviral
Kumar. Rl on incorrect synthetic data scales the efficiency of llm math reasoning by
[eight-fold, 2024. URL https://arxiv.org/abs/2406.14532.](https://arxiv.org/abs/2406.14532)
Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, YK Li,
Y Wu, and Daya Guo. Deepseekmath: Pushing the limits of mathematical reasoning in
open language models. arXiv preprint arXiv:2402.03300, 2024.
Mirac Suzgun, Nathan Scales, Nathanael Sch¨arli, Sebastian Gehrmann, Yi Tay, Hyung Won
Chung, Aakanksha Chowdhery, Quoc V. Le, Ed H. Chi, Denny Zhou, and Jason Wei.
Challenging big-bench tasks and whether chain-of-thought can solve them, 2022. URL
```
https://arxiv.org/abs/2210.09261.
```
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi,
Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language
[models, 2023. URL https://arxiv.org/abs/2201.11903.](https://arxiv.org/abs/2201.11903)
Yuxi Xie, Anirudh Goyal, Wenyue Zheng, Min-Yen Kan, Timothy P Lillicrap, Kenji
Kawaguchi, and Michael Shieh. Monte carlo tree search boosts reasoning via iterative
preference learning. arXiv preprint arXiv:2405.00451, 2024.
-----
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and
Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language
[models, 2023a. URL https://arxiv.org/abs/2305.10601.](https://arxiv.org/abs/2305.10601)
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and
Yuan Cao. ReAct: Synergizing reasoning and acting in language models. In International
_Conference on Learning Representations (ICLR), 2023b._
Zheng Yuan, Hongyi Yuan, Chengpeng Li, Guanting Dong, Keming Lu, Chuanqi Tan, Chang
Zhou, and Jingren Zhou. Scaling relationship on learning mathematical reasoning with
[large language models, 2023. URL https://arxiv.org/abs/2308.01825.](https://arxiv.org/abs/2308.01825)
Boning Zhang, Chengxi Li, and Kai Fan. Mario eval: Evaluate your math llm with your
[math llm–a mathematical dataset evaluation toolkit, 2024. URL https://arxiv.org/](https://arxiv.org/abs/2404.13925)
```
abs/2404.13925.
```
Yaowei Zheng, Richong Zhang, Junhao Zhang, Yanhan Ye, Zheyan Luo, and Yongqiang
Ma. Llamafactory: Unified efficient fine-tuning of 100+ language models. _CoRR,_
[abs/2403.13372, 2024. doi: 10.48550/ARXIV.2403.13372. URL https://doi.org/10.](https://doi.org/10.48550/arXiv.2403.13372)
```
48550/arXiv.2403.13372.
```
Brian D Ziebart. Modeling purposeful adaptive behavior with the principle of maximum
_causal entropy. Carnegie Mellon University, 2010._
Daniel M. Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B. Brown, Alec Radford, Dario Amodei,
Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences,
2020.
A Appendix
A.1 Prompt used in MCTS
Prompts for Round 1 and Round 2 are listed below.
Round 1 2-shot prompt
```
You are a powerful agent with advanced reasoning and planning
capabilities. Answer the questions as best you can.
!!!Remember:
1. Your answer should have two sections: "Plans" and "Detailed
Implementation".
2. In the "Plans" section, you should outline step-by-step plans for
solving the problem. These plans might include extracting key
information, forming sub-questions, analyzing aspects, etc. Each
step should introduce new insights, avoid overly abstract or generic
actions. End each step with "<endstep>".
3. In the "Detailed Implementation" section, provide detailed steps
that correspond to each plan, and conclude with "The final answer is
\boxed{answer}.<endsolution>"
The following is a template for your answer:
Question: The input question
Plans:
Plan 1: Describe the first plan step.<endstep>
Plan 2: Describe the second plan step<endstep>
...
```
-----
```
Plan N: Describe the final plan step<endstep>
Detailed Implementation:
1. Execute the first plan step
2. Execute the second plan step
...
N. Execute the final plan step
The final answer is \boxed{answer}.<endsolution>
The following are 2 demonstration examples.
Question: Natalia sold clips to 48 of her friends in April, and then
she sold half as many clips in May. How many clips did Natalia sell
altogether in April and May?
Plans:
Plan 1: Analyze the total number of clips sold in April.<endstep>
Plan 2: Calculate the number of clips sold in May by applying the
"half as many" condition to the number sold in April.<endstep>
Plan 3: Sum the results from April and May to determine the overall
total of clips sold over the two months.<endstep>
Detailed Implementation:
1. Natalia sold 48 clips in April.
2. The number of clips sold in May is $\frac{48}{2}=24$.
3. The total number of clips sold in April and May combined is
$48+24=72$.
The final answer is \boxed{72}.<endsolution>
Question: If $xˆ2+yˆ2=1$, what is the largest possible value of
$|x|+|y|$?
Plans:
Plan 1: Understand that the equation $xˆ2+yˆ2=1$ defines a circle
centered at the origin with a radius of 1. To maximize $|x|+|y|$, we
need to consider points on this circle that maximize the sum of the
absolute values of $x$ and $y$.<endstep>
Plan 2: Recognize that $|x|+|y|$ is maximized when both $|x|$ and
$|y|$ are large. The maximum sum occurs along lines where $x$ and
$y$ contribute equally, specifically along the lines $y=x$ and
$y=-x$.<endstep>
Plan 3: Identify the points of intersection between the lines $y=x$
and $y=-x$ with the circle $xˆ2+yˆ2=1$. These points are expected to
yield the maximum value of $|x|+|y|$.<endstep>
Plan 4: Evaluate $|x|+|y|$ for the intersection points to determine
the maximum possible value.<endstep>
Detailed Implementation:
1. The circle $xˆ2+yˆ2=1$ is centered at the origin with a radius of
1. We need to find the points on this circle that maximize the sum
$|x|+|y|$.
2. To maximize $|x|+|y|$, the sum is largest when both $|x|$ and
$|y|$ are large. This occurs along the lines $y=x$ and $y=-x$, where
$x$ and $y$ contribute equally to the sum.
```
-----
```
3. The intersection points are
$\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$,
$\left(\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right)$,
$\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$, and
$\left(-\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right)$.
4. For these points, calculate $|x|+|y|$. For
$\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$,
$|x|+|y|=\sqrt{2}$. The same value applies to the other points.
Therefore, the maximum value is $\sqrt{2}$.
The final answer is $\boxed{\sqrt{2}}$.<endsolution>
Now! It's your turn.
```
Round 2 XML 2-shot prompt
```
<question>
Question: Natalia sold clips to 48 of her friends in April, and then
she sold half as many clips in May. How many clips did Natalia sell
altogether in April and May?
</question>
<plan>
<step>
Plan 1: Analyze the total number of clips sold in April.
</step>
<step>
Plan 2: Calculate the number of clips sold in May by applying the
"half as many" condition to the number sold in April.
</step>
<step>
Plan 3: Sum the results from April and May to determine the overall
total of clips sold over the two months.
</step>
</plan>
<solution>
1. Natalia sold 48 clips in April.
2. The number of clips sold in May is $\frac{48}{2}=24$.
3. The total number of clips sold in April and May combined is
$48+24=72$.
The final answer is \boxed{72}.
</solution>
<question>
If $xˆ2+yˆ2=1$, what is the largest possible value of $|x|+|y|$?
</question>
<plan>
<step>
Plan 1: Understand that the equation $xˆ2+yˆ2=1$ defines a circle
centered at the origin with a radius of 1. To maximize $|x|+|y|$, we
need to consider points on this circle that maximize the sum of the
absolute values of $x$ and $y.
</step>
<step>
Plan 2: Recognize that $|x|+|y|$ is maximized when both $|x|$ and
$|y|$ are large. The maximum sum occurs along lines where $x$ and
$y$ contribute equally, specifically along the lines $y=x$ and $y=-x.
</step>
```
-----
```
<step>
Plan 3: Identify the points of intersection between the lines $y=x$
and $y=-x$ with the circle $xˆ2+yˆ2=1$. These points are expected to
yield the maximum value of $|x|+|y|.
</step>
<step>
Plan 4: Evaluate $|x|+|y|$ for the intersection points to determine
the maximum possible value.
</step>
</plan>
<solution>
1. The circle $xˆ2+yˆ2=1$ is centered at the origin with a radius of
1. We need to find the points on this circle that maximize the sum
$|x|+|y|$.
2. To maximize $|x|+|y|$, the sum is largest when both $|x|$ and
$|y|$ are large. This occurs along the lines $y=x$ and $y=-x$, where
$x$ and $y$ contribute equally to the sum.
3. The intersection points are
$\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$,
$\left(\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right)$,
$\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$, and
$\left(-\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right)$.
4. For these points, calculate $|x|+|y|$. For
$\left(\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$,
$|x|+|y|=\sqrt{2}$. The same value applies to the other points.
Therefore, the maximum value is $\sqrt{2}$.
The final answer is $\boxed{\sqrt{2}}.
</solution>
```
A.2 Implementation Details
Table 6: Key Hyperparameters of CPL
Hyperparameter Value
_cpuct_ 1.5
Simulations N 200 (for round 1) or 100
Expand child nodes 5 (for root) or 3
Temperature 0.7
Max depth 6
SFT batch size 512
SFT learning rate 1e-5
SFT epochs 5 (for round 1) or 3
Step-APO batch size 64
Step-APO β 0.3
Step-APO learning rate 1e-6
Step-APO epochs 2
Solution step scaling factor 0.3
Lr scheduler type cosine
Warmup ratio 0.1
**Experiment Environments We implement our Step-APO in Llama Factory (Zheng et al.,**
2024) and use Llama Factory as the training framwork. We use vLLM (Kwon et al., 2023) as
the inference framework. We train all models with DeepSpeed ZeRO Stage2 (Rajbhandari
et al., 2021), Flash Attention 2 (Dao, 2023).
The key hyperparameter of CPL is listed in Table 6.
-----
| [
"Tianlong, Wang",
"Xueting, Han",
"Jing, Bai"
] | 2024-09-13T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.08642 | https://arxiv.org/abs/2409.08642 | null |
Can AI prove creatively? | N/A | null | [
", Irina"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | https://aitp-conference.org/2024/abstract/AITP_2024_paper_29.pdf | null | null |
|
Can LLMs Compute with Reasons? | Large language models (LLMs) often struggle with complex mathematical tasks, prone to "hallucinating" incorrect answers due to their reliance on statistical patterns. This limitation is further amplified in average Small LangSLMs with limited context and training data. To address this challenge, we propose an "Inductive Learning" approach utilizing a distributed network of SLMs. This network leverages error-based learning and hint incorporation to refine the reasoning capabilities of SLMs. Our goal is to provide a framework that empowers SLMs to approach the level of logic-based applications achieved by high-parameter models, potentially benefiting any language model. Ultimately, this novel concept paves the way for bridging the logical gap between humans and LLMs across various fields. | The goal is to provide a framework that empowers SLMs to approach the level of logic-based applications achieved by high-parameter models, potentially benefiting any language model. | ## Can LLMs Compute with Reasons?
**Harshit Sandilya** [1 2] **Peehu Raj** [3] **Jainit Sushil Bafna** [1 4] **Srija Mukhopadhyay** [1 4] **Shivansh Sharma** [5]
**Ellwil Sharma** [1] **Arastu Sharma** [1] **Neeta Trivedi** [3] **Manish Shrivastava** [4] **Rajesh Kumar** [2]
**Abstract**
Additionally, new methods that focus on creating smaller
models trained on ”textbook-quality” data have also seen
tremendous success. (Li et al., 2023b).
**2. Related Work**
Large language models (LLMs) often struggle
with complex mathematical tasks, prone to ”hallucinating” incorrect answers due to their reliance
on statistical patterns. This limitation is further amplified in average Small LangSLMs with
limited context and training data. To address
this challenge, we propose an ”Inductive Learning” approach utilizing a distributed network of
SLMs. This network leverages error-based learning and hint incorporation to refine the reasoning capabilities of SLMs. Our goal is to provide
a framework that empowers SLMs to approach
the level of logic-based applications achieved by
high-parameter models, potentially benefiting any
language model. Ultimately, this novel concept
paves the way for bridging the logical gap between humans and LLMs across various fields.
**1. Introduction**
**2.1. Improving Mathematical Reasoning of LLMs**
Various methods are employed to improve the mathematical
reasoning capabilities of LLMs. One commonly applied
method is continual pre-training (Azerbayev et al., 2023).
The model is trained on large-scale mathematical datasets,
fine-tuning it by continuing the pre-training process.
Another approach employed is supervised fine-tuning (Yuan
et al., 2023; Luo et al., 2023), where high-quality questionanswer pairs are collected through various techniques and
then used to fine-tune the model to enhance its performance.
Synthetically constructed datasets are often used for this
process. These methods use LLMs to generate the data,
followed by various augmentation methods (Yuan et al.,
2023; Yu et al., 2023; Li et al., 2023a) to filter the data.
Various reasoning frameworks are also implemented to bring
out the best reasoning answers. This includes promptingbased (Wei et al., 2022; Chen et al., 2023) and selfconsistency (Wang et al., 2023) methods where the model
uses majority voting to decide among various rational paths.
The most commonly used approaches are Chain-of-Thought
reasoning (Wei et al., 2022; Nye et al., 2021) and Programof-Thought reasoning (Chen et al., 2023). Over the recent
past, new methods like Equation-of-Thought Distillation
(Zhu et al., 2024), which work on a similar principle, have
also emerged.
There is also the emergence of Program-aided Language
Model (PAL) (Gao et al., 2023), where code is generated
from LLMs, which is later passed through an external API
to generate the final output. A similar approach also uses
symbolic solvers to implement the same (He-Yueya et al.,
2023).
Large Language Models(LLMs) have proved extremely capable of performing various generative tasks (Brown et al.,
2020; Chen et al., 2021). Open source models have also
shown considerable success in this field (Beeching et al.,
2023). However, reasoning tasks, especially mathematical
reasoning, continue to pose a massive challenge for all of
these models (Hendrycks et al., 2021; Espejel et al., 2023).
There has been considerable work on improving the performance of LLMs for mathematical reasoning tasks, including
training on specialized datasets through different fine-tuning
approaches (Yuan et al., 2023; Luo et al., 2023) or continual
pre-training (Taylor et al., 2022). Other methods include
using different prompting mechanisms to obtain better answers and enhance the reasoning capabilities of these LLMs
(Wei et al., 2022; Chen et al., 2023; Gao et al., 2023).
1ShodhLab.ai 2Raman Lab, Malaviya National Institute of Technology, India [3]Inferigence Quotient Private Limited [4]Language
Technologies Research Center, KCIS, IIIT Hyderabad [5]Suresh
Gyan Vihar University, India. Correspondence to: Harshit Sandilya
_<[email protected]>._
**2.2. Knowledge Distillation based methods**
Knowledge Distillation (Hinton et al., 2015; Magister et al.,
2023; Shridhar et al., 2023) is a method to transfer knowl
-----
**ShodhLab**
edge from a general usage LLM to a smaller, efficient Small
Language Model (SLM) without losing validity.
Various methodologies might be used for the same, which
include response-based knowledge transfer (Hinton et al.,
2015; Jin et al., 2019), feature-based knowledge transfer
(Roheda et al., 2018; Heo et al., 2018), and relation-based
knowledge transfer (Yim et al., 2017; Li et al., 2023a).
Reasoning generated from prompts like Chain-of-Thought
(CoT) (Wei et al., 2022), Program-of-Thought (PoT) (Chen
et al., 2023), and Equation-of-Thought (EoT) (Zhu et al.,
2024), along with their combinations, might also be used to
distill knowledge from LLMs to SLMs.
**2.3. Ensemble Methods**
Ensemble methods (Ganaie et al., 2021) involve creating
multiple models and combining them to produce better results. Various techniques combine these models, such as
majority voting, confidence scoring, and aggregation. Additionally, other models can be used, as demonstrated through
LLM-Blender (Jiang et al., 2023).
Our approach is closely related to the work done in
TinyGSM (Liu et al., 2023), which studies the importance
of high quality datasets for enhancing the mathematical reasoning capabilities of SLMs. Another closely related work
is MathPrompter (Imani et al., 2023) which uses Zero-shot
chain-of-thought prompting to generate multiple reasonings
for the same function to raise the confidence level in output
results.
However, we improve on these by using a distributed network of N model pairs. These models perform computations
in parallel and contribute to the voting process. Thus, we
increase efficiency while being able to obtain more accurate
results.
**3. Methodology**
We introduce the ”Inductive Learning” approach for reasoning enhancement in this section. As seen before, many
advancements have used solvers to rectify the error. The
main area of improvement seen was the missing reasoning.
Our network improves upon the reasoning by introducing
a second LLM, and to keep the probability of correctness
still equivalent to the single LLM-based reasoning, we made
some modifications to the network.
**3.1. Reasoning Ability**
The network, as shown in Figure 1, is the basis of the reason.
The upper LLM is termed GP in the discussion below, while
the lower one is termed EQ.
Our architecture leverages the context mechanism to auto
**Algorithm 1 Cross Inference**
**Function match(GP, EQ)**
**Input: lists GP and EQ**
**Output: index of the element in GP with the highest**
matching score
Initialize matching score ← a list of 0s with length equal
to the length of EQ
**for i = 1 to length of GP do**
**for j = 1 to length of EQ do**
**if GP[i] = EQ[j] then**
matching score[j] ← matching score[j] + 1
**end if**
**end for**
**end for**
**return index of the maximum value in matching score**
correct itself using hints from the other counterpart. Both the
LLMs are used to solve a question. GP solves the question
using logic and reasoning, while EQ converts the question
to a computation task that can be accomplished using any
programming language. Both produce results independent
of each other, and finally, we compare the results to check
them.
Here, two possibilities emerge:
i Both give the same answer: The answer might be correct.
Forward it as output.
ii Both give a different answer: One or both LLMs made a
mistake. Give them hints in a context like ’The equations
we used were’ and ’Logically thinking we might get the
answer.’
Suppose the probability that a GP gives the correct answer is Preason while for an EQ, it’s Pnumerical. So,
for the final output to match, we have a probability of
_Ptotal = Preason × Pnumerical, which is reduced to the_
original probability as P < 1. But the surety of getting the
correct answer increases.
**3.2. Cross Inference**
To overcome this and get a meaningful result, we run the
network as shown in Figure 2
Here, we have n LLMs working in parallel and have to find
the correct answer. We can use the following equation to
find the probability of getting the correct answer:
_Psingle_ (1)
_k=1_
X
_Ptotal =_
where
_Psingle = Preason × Pnumerical_ (2)
-----
**ShodhLab**
_Figure 1. Network Topology_
ing in terms of Algorithm 1, we can conclude:
_δij_ (3)
_matching scorei =_
Where δij is zero if the pair doesn’t match and one if it
matches, and the probability that each δij is correct depends on the independent probability of both being correct,
giving us Preason × Pnumerical again. We then do the
_max(matching scorei) to find the final answer and send it_
forward.
Here, we assume that the network still learns from its counterpart, but instead, we now treat the inference over the
whole network rather than treating them as individual networks. This gives us a boost over the conventional distributed architecture and increases our chances of finding
the correct answer while still giving us an answer if no answer matches. This shows the potential to replace single
LLM over logical tasks.
**4. Results**
**4.1. Experimental Setup**
4.1.1. DATASET DESCRIPTION
We utilized two distinct datasets, both containing questions
from the Grade School Math 8K (GSM8K) dataset (Cobbe
et al., 2021). These datasets, however are different in the nature of their answers. The first dataset, termed the ”GSM8Kcode dataset,” contained answers as equations in the form
of executable Python code snippets. Notably, the GSM8Kcode dataset was generated by us utilizing the state-of-theart GPT 3.5 model (OpenAI, 2022) developed by OpenAI.
_Figure 2. Distributed computation_
_Figure 3. The cross inference for increasing probability_
Every pair here is mutually exclusive and learns only from
its counterpart. We can modify the network to include crossinference, increasing our probability of finding the correct
answer. To do so, let’s assume we store every LLM’s output
in a matrix. Thus, we have two such matrices; one contains
answers for GPs, and the other contains answers for the EQs.
We try to find the matching pairs that give the same output,
as shown in Figure 3
We calculate a matching score for every GP. And finally,
the answer is the one with the highest matching score. Think
-----
**ShodhLab**
The GSM8K-code dataset was utilized for fine-tuning the
Python Equation (EQ) models responsible for generating
answers as equations in the form of Python code. The second dataset, referred to as the ”GSM8K-base dataset,” is
the same as the original GSM8K dataset on the HuggingFace dataset hub. GSM8K-base dataset was utilized for
fine-tuning General Purpose (GP) models responsible for
generating answers in natural language.
4.1.2. FINE-TUNING MODELS
We employed an experimental setup consisting of eight Phi
1.5 models (Li et al., 2023b), fine-tuned utilizing a cluster
of eight Graphical Processing Units (GPUs). Within this
setup, four GPUs were allocated for fine-tuning GP models
targeting the generation of general answers on the GSM8Kbase dataset. Concurrently, the remaining four GPUs were
dedicated to fine-tuning EQ models on the GSM8K-code
dataset to facilitate the generation of Python code using the
Hugging Face transformers API (Wolf et al., 2019).
4.1.3. INFERENCE
Each GPU in the configuration was an NVIDIA A100, featuring 40GB of VRAM. To distribute the workload efficiently, four GP models and four EQ models were instantiated, with each GP model and EQ model assigned to separate GPUs. We employed the multiprocessing module of PyTorch (Paszke et al., 2019) to spawn individual GP
models and EQ models on their respective GPUs.
Upon completion of inference for each model, Torch’s
torch.barrier was utilized to synchronize all GPUs,
ensuring consistent progress across the distributed system.
Post-inference, the outputs of the EQ models were compared against the outputs of their corresponding GP models.
In cases where disparities arose between the outputs, reprompting and re-inference were executed exclusively for
GP models whose outputs did not align with their corresponding EQ model outputs.
To validate the final outputs comprehensively, a voting mechanism was implemented, cross-verifying the outputs obtained from multiple models.
We can see varied results, from the actual increase in logic
for solving math problems to some being missed at the last
steps. We even see some wild cases where both are wrong
and still give the same answer. We cover everyone in their
different sections. To assess the overall efficacy of our network, we compared its performance with the benchmark set
by the pre-trained Phi 1.5 model on the GSM8K benchmark.
The Phi 1.5 model achieved a benchmark performance of
12.58%, providing a reference point for our network’s performance in algebraic reasoning tasks.
**4.2. GP Evaluation**
We initiated our evaluation by fine-tuning the GP (top LLM)
on the GSM8K Train Dataset using the pre-trained Phi
1.5 model. Four runs of inference were performed on the
GSM8K benchmark with the same dataset, considering four
different pairs of LLMs in our network topology. The individual benchmark performances of the GP models (denoted as GP I, GP II, GP III, and GP IV ) are shown in
Table 1
GP I GP II GP III GP IV
33.49 32.46 34.44 34.72
_Table 1. Output for GPs_
Slight variations in performance among the models are attributed to the inherent variability in the fine-tuning process.
On average, the fine-tuned GP model demonstrated a performance of 33.1%.
**4.3. EQ Evaluation**
The EQ (lower LLM) played a crucial role in
generating equations, subsequently processed by a
Python solver. Synthetically generated fine-tuning data
was used to evaluate the EQ models (denoted as
_EQ I, EQ II, EQ III and EQ IV ) on their ability to pro-_
duce correct solutions. The individual performances of these
models are as shown in Table 2
EQ I EQ II EQ III EQ IV
51.53 51.74 52.61 53.30
_Table 2. Output for EQs_
On average, the EQ models demonstrated an effective performance of 52.30%.
**4.4. Single Network Evaluation**
Running every network independently we can generate an
output as shown in Table 3. We could also see the edge
cases here where both match and still couldn’t match the
actual answer.
I II III IV
_GM = EQ ̸= Ans_ 27.81 27.85 29.04 30.78
_GM = EQ = Ans_ 25.53 25.47 26.67 27.55
_Table 3. Running Network Independently_
These percentages represent the extent to which the outputs
of the GP and EQ LLMs align during the cross-checking
loops. The variations in matching percentages highlight the
-----
**ShodhLab**
dynamic nature of the iterative improvement process, with
increasing alignment observed across subsequent pairs.
Although the hallucinations are less, they greatly impact
our results. Still, our network manages to reduce them with
cross-inference. A single network can be seen decreasing
in probability as assumed in the section before hence the
actual run consisted of the cross inference.
**4.5. Topology output**
The network integration involves four loops of the LLM
pair (GP and EQ), designed to cross-check and refine the
generated answers iteratively.
When the entire network is run, the main output, represented
by the Topology, is observed to be 50.29%. This significant
improvement is noteworthy when compared to individual
components. Specifically, the improvement is as follows:
GP (top LLM) alone achieved around 33%; with topology,
we see an improvement of 17.3% average. GP + EQ (matching and correct alignment) together yielded only 26%, while
the topology implementation was improved by 25.29% on
average.
The main objective was teaching the GP the logic behind the
problem by breaking it down into equations close to what
humans do in the real world. The concept of breaking the
problem logically helped the LLM build its own reason and
improve using the established facts.
**4.6. Comparative Analysis with other approaches**
In this section, we aim to provide a comparative analysis
of our approach with other similar efforts in the field of
enhancing mathematical reasoning using large language
models. We will specifically discuss two notable works, one
mentioned in the Phi 1.5 technical report and another in the
context of TinyGSM.
4.6.1. COMPARISON WITH PHI 1.5 TECHNICAL REPORT
The Phi 1.5 technical report (Li et al., 2023b) discusses
a similar effort where Python output is employed through
a Python solver for fine-tuning purposes, and the model’s
performance is evaluated on the GSM8K benchmark. In
that case, the GSM8K performance of Phi 1.5 is reported to
be 40%.
In contrast, our novel network architecture, incorporating
GP and EQ LLMs with synthetic data fine-tuning, achieves
a main output of 50.29%. This suggests a substantial improvement over the baseline set by the Phi 1.5 model. The
utilization of cross-checking loops and the distributed learning method, particularly the LLM pair voting mechanism,
contributes to the enhanced verification of overall output.
4.6.2. COMPARISON WITH TINYGSM
Another relevant work, discussed in TinyGSM (Liu et al.,
2023), involves a detailed exploration of improving output
through fine-tuning and verification loops with two different LLMs. While the output on GSM8K is reported to be
better in this method, we argue that our approach excels in
transferring logic more effectively.
4.6.3. COMPARISON WITH TINYGSM
In TinyGSM, a substantial increase in performance on the
GSM8K test set was reported, rising from 44.6% to 68.2%.
This represents an impressive percentage increase of approximately 23.6%.
Comparative Analysis of Percentage Increases Now, let’s
compare this with the percentage increases achieved in our
approach:
**Improvement in GP model:** Our GP (top LLM) performance was initially around 33%, and after the implementation of our network architecture, the GP improved by an
average of 17.3%.
**Topology Output Improvement:** The collaborative efforts of GP and EQ LLMs resulted in a notable improvement,
with GP and EQ matched with ground truth is increasing by
an average of 25.29%.
Our network’s distributed learning method, incorporating
the collaborative efforts of GP and EQ LLMs in crosschecking loops, aims to transfer reasoning and logic in a
more refined manner. The voting mechanism employed in
our LLM pair contributes to a better-verified overall output.
While the percentage increase in GP is substantial at
**17.3%, it is important to note that the comparison metric in**
TinyGSM represents a different aspect of performance on
the GSM8K test set. Additionally, the combined improve**ment of by 25.29% further underscores the effectiveness**
of our network architecture.
Our approach demonstrates competitive percentage increases in relevant metrics, showcasing advancements in
mathematical reasoning capabilities. The comparison with
TinyGSM highlights the diverse methodologies employed in
the field and the need for nuanced evaluation to understand
the strengths and limitations of each approach.
4.6.4. FUTURE RESEARCH CONSIDERATIONS
While our approach demonstrates promising results, we
acknowledge the need for further research to comprehensively compare different methodologies. Comparative studies should delve into the intricacies of the approaches, considering not only benchmark performance but also the effi
-----
**ShodhLab**
ciency in logic transfer and verification mechanisms.
In conclusion, our network architecture presents advancements in mathematical reasoning capabilities, outperforming the benchmark set by the Phi 1.5 model. The collaborative efforts of GP and EQ LLMs, coupled with the distributed learning method, set our approach apart. Ongoing
research endeavors will be crucial for a deeper understanding and comparison of different methodologies in this evolving domain.
**4.7. Logic Improvement**
The back loop hints to both EQ and GP to rethink their
strategies. In due time, we can observe the growth of logic
in the GP as it tries to get towards the answer. It is evident
from the following examples we observed:
QUESTION: A candle melts by 2 centimetres every
hour that it burns. How many centimetres shorter
will a candle be after burning from 1:00 PM to
5:00 PM?
ANSWER: 8 centimeters
INITIAL: The candle will have burned for 5-1 = 4
hours. After 4 hours, the candle will have melted
4 * 2 = 8. Thus, the candle will be 8 - 2 = 6 centimeters shorter after burning from 1:00 PM to
5:00 PM.
LOOPED: Burning from 1:00 PM to 5:00 PM is a total
of 5 - 1 = 4 hours. The candle will melt 4 * 2 =8
centimeters shorter.
In some cases, even though it didn’t get the answer right,
we could still see a logical improvement, as shown below.
QUESTION: A new program had 60 downloads in the
first month. The number of downloads in the
second month was three times as many as those
in the first month but then reduced by 30% in
the third month. How many downloads did the
program have in total over the three months?
ANSWER: 366
INITIAL: The number of downloads in the second month was 3*60=180 downloads. The
number of downloads in the third month was
180*(100/100)=180 downloads. The total number of downloads over the three months is
60+180+180=420 downloads.
LOOPED: The number of downloads in the second
month was 3*60= 180. In the third month, the
number of downloads was 180*.7=126. In total,
the number of downloads in the three months was
126+180+60=426
**5. Conclusion**
We can see the performance of our network close to a finetuned LLM with equation-based context. Using our ”Indusction Learning” approach we can see the results vary
from 33% of a fine-tuned model to 26% for the initial network, finally improving up to 50.29% for the distributed
network. The key feature discriminating our network is the
logic improvement of GPs. The GP improve their logic
as we supply them with hints. In the future, using a large
language model relying on reinforcement learning to tune
the model, we will strive to improve the logic of the upper
LLM as a standalone unit. We can use the trained LLM
with higher logic to solve problems. This can be applied to
various fields where the training of the initial model can be
made more precise using these types of networks.
Compared to state-of-the-art of the appraich, our model
surpasses the Phi 1.5 baseline, which achieved a 40.2% reasoning output using a solver, compared to our 50.29%. Despite TinyGSM’s impressive increase from 44.6% to 68.2%,
our collaborative GP and EQ approach demonstrates competitive performance with a 25.29% increase, showcasing
advancements in mathematical reasoning capabilities.
**References**
Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M. D.,
McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and
Welleck, S. Llemma: An open language model for mathematics, 2023.
Beeching, E., Fourrier, C., Habib, N., Han, S.,
Lambert, N., Rajani, N., Sanseviero, O., Tunstall, L., and Wolf, T. Open llm leaderboard. [https://huggingface.co/spaces/](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
[HuggingFaceH4/open_llm_leaderboard,](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
2023.
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan,
J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G.,
Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G.,
Henighan, T., Child, R., Ramesh, A., Ziegler, D. M.,
Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E.,
Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C.,
McCandlish, S., Radford, A., Sutskever, I., and Amodei,
D. Language models are few-shot learners. _CoRR,_
[abs/2005.14165, 2020. URL https://arxiv.org/](https://arxiv.org/abs/2005.14165)
[abs/2005.14165.](https://arxiv.org/abs/2005.14165)
Chen, M., Tworek, J., Jun, H., Yuan, Q., de Oliveira Pinto,
H. P., Kaplan, J., Edwards, H., Burda, Y., Joseph, N.,
Brockman, G., Ray, A., Puri, R., Krueger, G., Petrov,
-----
**ShodhLab**
M., Khlaaf, H., Sastry, G., Mishkin, P., Chan, B., Gray,
S., Ryder, N., Pavlov, M., Power, A., Kaiser, L., Bavarian, M., Winter, C., Tillet, P., Such, F. P., Cummings,
D., Plappert, M., Chantzis, F., Barnes, E., HerbertVoss, A., Guss, W. H., Nichol, A., Paino, A., Tezak,
N., Tang, J., Babuschkin, I., Balaji, S., Jain, S., Saunders, W., Hesse, C., Carr, A. N., Leike, J., Achiam,
J., Misra, V., Morikawa, E., Radford, A., Knight, M.,
Brundage, M., Murati, M., Mayer, K., Welinder, P., McGrew, B., Amodei, D., McCandlish, S., Sutskever, I.,
and Zaremba, W. Evaluating large language models
trained on code. CoRR, abs/2107.03374, 2021. URL
[https://arxiv.org/abs/2107.03374.](https://arxiv.org/abs/2107.03374)
Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program
of thoughts prompting: Disentangling computation from
reasoning for numerical reasoning tasks, 2023.
Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H.,
Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano,
R., Hesse, C., and Schulman, J. Training verifiers to solve
math word problems. CoRR, abs/2110.14168, 2021. URL
[https://arxiv.org/abs/2110.14168.](https://arxiv.org/abs/2110.14168)
Espejel, J. L., Ettifouri, E. H., Alassan, M. S. Y., Chouham,
E. M., and Dahhane, W. Gpt-3.5, gpt-4, or bard? evaluating llms reasoning ability in zero-shot setting and performance boosting through prompts, 2023.
Ganaie, M. A., Hu, M., Tanveer, M., and Suganthan,
P. N. Ensemble deep learning: A review. _CoRR,_
[abs/2104.02395, 2021. URL https://arxiv.org/](https://arxiv.org/abs/2104.02395)
[abs/2104.02395.](https://arxiv.org/abs/2104.02395)
Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y.,
Callan, J., and Neubig, G. Pal: Program-aided language
models, 2023.
He-Yueya, J., Poesia, G., Wang, R. E., and Goodman, N. D.
Solving math word problems by combining language
models with symbolic solvers, 2023.
Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart,
S., Tang, E., Song, D., and Steinhardt, J. Measuring
mathematical problem solving with the MATH dataset.
_[CoRR, abs/2103.03874, 2021. URL https://arxiv.](https://arxiv.org/abs/2103.03874)_
[org/abs/2103.03874.](https://arxiv.org/abs/2103.03874)
Heo, B., Lee, M., Yun, S., and Choi, J. Y. Knowledge
transfer via distillation of activation boundaries formed
by hidden neurons. CoRR, abs/1811.03233, 2018. URL
[http://arxiv.org/abs/1811.03233.](http://arxiv.org/abs/1811.03233)
Hinton, G., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network, 2015.
Imani, S., Du, L., and Shrivastava, H. Mathprompter: Mathematical reasoning using large language models, 2023.
Jiang, D., Ren, X., and Lin, B. Y. Llm-blender: Ensembling large language models with pairwise ranking and
generative fusion, 2023.
Jin, X., Peng, B., Wu, Y., Liu, Y., Liu, J., Liang, D., Yan, J.,
and Hu, X. Knowledge distillation via route constrained
[optimization. CoRR, abs/1904.09149, 2019. URL http:](http://arxiv.org/abs/1904.09149)
[//arxiv.org/abs/1904.09149.](http://arxiv.org/abs/1904.09149)
Li, C., Yuan, Z., Yuan, H., Dong, G., Lu, K., Wu, J., Tan,
C., Wang, X., and Zhou, C. Query and response augmentation cannot help out-of-domain math reasoning generalization, 2023a.
Li, Y., Bubeck, S., Eldan, R., Giorno, A. D., Gunasekar,
S., and Lee, Y. T. Textbooks are all you need ii: phi-1.5
technical report, 2023b.
Liu, B., Bubeck, S., Eldan, R., Kulkarni, J., Li, Y., Nguyen,
A., Ward, R., and Zhang, Y. Tinygsm: achieving ¿80
Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X.,
Lin, Q., Chen, S., and Zhang, D. Wizardmath: Empowering mathematical reasoning for large language models
via reinforced evol-instruct, 2023.
Magister, L. C., Mallinson, J., Adamek, J., Malmi, E., and
Severyn, A. Teaching small language models to reason,
2023.
Nye, M. I., Andreassen, A. J., Gur-Ari, G., Michalewski,
H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A.,
Bosma, M., Luan, D., Sutton, C., and Odena, A. Show
your work: Scratchpads for intermediate computation
with language models. CoRR, abs/2112.00114, 2021.
[URL https://arxiv.org/abs/2112.00114.](https://arxiv.org/abs/2112.00114)
OpenAI. Introducing chatgpt.
https://openai.com/blog/chatgpt, 2022.
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J.,
Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga,
L., Desmaison, A., Kopf, A., Yang, E. Z., DeVito, Z.,¨
Raison, M., Tejani, A., Chilamkurthy, S., Steiner, B.,
Fang, L., Bai, J., and Chintala, S. Pytorch: An imperative
style, high-performance deep learning library. CoRR,
[abs/1912.01703, 2019. URL http://arxiv.org/](http://arxiv.org/abs/1912.01703)
[abs/1912.01703.](http://arxiv.org/abs/1912.01703)
Roheda, S., Riggan, B. S., Krim, H., and Dai, L. Crossmodality distillation: A case for conditional generative
adversarial networks, 2018.
Shridhar, K., Stolfo, A., and Sachan, M. Distilling reasoning
capabilities into smaller language models, 2023.
Taylor, R., Kardas, M., Cucurull, G., Scialom, T., Hartshorn,
A., Saravia, E., Poulton, A., Kerkez, V., and Stojnic, R.
Galactica: A large language model for science, 2022.
-----
**ShodhLab**
Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang,
S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models,
2023.
Wei, J., Wang, X., Schuurmans, D., Bosma, M., Chi,
E. H., Le, Q., and Zhou, D. Chain of thought prompting elicits reasoning in large language models. CoRR,
[abs/2201.11903, 2022. URL https://arxiv.org/](https://arxiv.org/abs/2201.11903)
[abs/2201.11903.](https://arxiv.org/abs/2201.11903)
Wolf, T., Debut, L., Sanh, V., Chaumond, J., Delangue,
C., Moi, A., Cistac, P., Rault, T., Louf, R., Funtowicz, M., and Brew, J. Huggingface’s transformers:
State-of-the-art natural language processing. _CoRR,_
[abs/1910.03771, 2019. URL http://arxiv.org/](http://arxiv.org/abs/1910.03771)
[abs/1910.03771.](http://arxiv.org/abs/1910.03771)
Yim, J., Joo, D., Bae, J., and Kim, J. A gift from knowledge
distillation: Fast optimization, network minimization and
transfer learning. In 2017 IEEE Conference on Computer
_Vision and Pattern Recognition (CVPR), pp. 7130–7138,_
2017. doi: 10.1109/CVPR.2017.754.
Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok,
J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language
models, 2023.
Yuan, Z., Yuan, H., Li, C., Dong, G., Lu, K., Tan, C., Zhou,
C., and Zhou, J. Scaling relationship on learning mathematical reasoning with large language models, 2023.
Zhu, X., Li, J., Liu, Y., Ma, C., and Wang, W. Distilling
mathematical reasoning capabilities into small language
models, 2024.
-----
| [
"Harshit, Sandilya",
"Peehu, Raj",
"Jainit Sushil, Bafna",
"Srija, Mukhopadhyay",
"Shivansh, Sharma",
"Ellwil, Sharma",
"Arastu, Sharma",
"Neeta, Trivedi",
"Manish, Shrivastava",
"Rajesh, Kumar"
] | 2024-02-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2402.12080 | https://arxiv.org/abs/2402.12080 | https://www.semanticscholar.org/paper/b18c5799f40307d0b04fd52795abb60809fed395 |
Can Large Language Models Replicate ITS Feedback on Open-Ended Math Questions? | Intelligent Tutoring Systems (ITSs) often contain an automated feedback component, which provides a predefined feedback message to students when they detect a predefined error. To such a feedback component, we often resort to template-based approaches. These approaches require significant effort from human experts to detect a limited number of possible student errors and provide corresponding feedback. This limitation is exemplified in open-ended math questions, where there can be a large number of different incorrect errors. In our work, we examine the capabilities of large language models (LLMs) to generate feedback for open-ended math questions, similar to that of an established ITS that uses a template-based approach. We fine-tune both open-source and proprietary LLMs on real student responses and corresponding ITS-provided feedback. We measure the quality of the generated feedback using text similarity metrics. We find that open-source and proprietary models both show promise in replicating the feedback they see during training, but do not generalize well to previously unseen student errors. These results suggest that despite being able to learn the formatting of feedback, LLMs are not able to fully understand mathematical errors made by students. | The capabilities of large language models (LLMs) to generate feedback for open-ended math questions, similar to that of an established ITS that uses a template-based approach are examined, finding that open-source and proprietary models both show promise in replicating the feedback they see during training, but do not generalize well to previously unseen student errors. | [
"Hunter, McNichols",
"Stephen, Fancsali",
"Jaewook, Lee",
"Andrew, Lan",
"Steve, Ritter"
] | 2024-07-08T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2405.06414 | https://arxiv.org/abs/2405.06414 | https://www.semanticscholar.org/paper/b7cff145bb69dce47b73ddd04bcd47057c9256c3 |
|
Can We Count on LLMs? The Fixed-Effect Fallacy and Claims of GPT-4 Capabilities | In this paper we explore evaluation of LLM capabilities. We present measurements of GPT-4 performance on several deterministic tasks; each task involves a basic calculation and takes as input parameter some element drawn from a large well-defined population (e.g., count elements in a list, multiply two k-digit numbers, etc). We examine several conditions per-task and perform enough trials so that statistically significant differences can be detected. This allows us to investigate the sensitivity of task-accuracy both to query phrasing and input parameter population. We find that seemingly trivial modifications in the task-prompt or input population can yield differences far larger than can be explained by sampling effects. For example, performance on a simple list-counting task varies with query-phrasing and list-length, but also with list composition (i.e., the thing-to-be-counted) and object frequency (e.g., success when an element accounts for $\approx$ 50\% of a list is different from when it accounts for $\approx$ 70\% etc). We conclude that efforts to quantify LLM capabilities easily succumb to the language-as-fixed-effect fallacy, where experimental observations are improperly generalized beyond what the data supports. A consequence appears to be that intuitions that have been formed based on interactions with humans form a very unreliable guide as to which input modifications should ``make no difference'' to LLM performance. | Evaluation of LLM capabilities is explored, finding that efforts to quantify LLM capabilities easily succumb to the language-as-fixed-effect fallacy, where experimental observations are improperly generalized beyond what the data supports. | [
"Thomas, Ball",
"Shuo, Chen",
"Cormac, Herley"
] | 2024-09-11T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.07638 | https://arxiv.org/abs/2409.07638 | https://www.semanticscholar.org/paper/36585c22349a8b485cef9435add85c355b33b851 |
|
Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks | State-of-the-art large language models (LLMs) exhibit impressive problem-solving capabilities but may struggle with complex reasoning and factual correctness. Existing methods harness the strengths of chain-of-thought and retrieval-augmented generation (RAG) to decompose a complex problem into simpler steps and apply retrieval to improve factual correctness. These methods work well on straightforward reasoning tasks but often falter on challenging tasks such as competitive programming and mathematics, due to frequent reasoning errors and irrelevant knowledge retrieval. To address this, we introduce Critic-guided planning with Retrieval-augmentation, CR-Planner, a novel framework that leverages fine-tuned critic models to guide both reasoning and retrieval processes through planning. CR-Planner solves a problem by iteratively selecting and executing sub-goals. Initially, it identifies the most promising sub-goal from reasoning, query generation, and retrieval, guided by rewards given by a critic model named sub-goal critic. It then executes this sub-goal through sampling and selecting the optimal output based on evaluations from another critic model named execution critic. This iterative process, informed by retrieved information and critic models, enables CR-Planner to effectively navigate the solution space towards the final answer. We employ Monte Carlo Tree Search to collect the data for training the critic models, allowing for a systematic exploration of action sequences and their long-term impacts. We validate CR-Planner on challenging domain-knowledge-intensive and reasoning-heavy tasks, including competitive programming, theorem-driven math reasoning, and complex domain retrieval problems. Our experiments demonstrate that CR-Planner significantly outperforms baselines, highlighting its effectiveness in addressing challenging problems by improving both reasoning and retrieval. | Critic-guided planning with Retrieval-augmentation, CR-Planner is introduced, a novel framework that leverages fine-tuned critic models to guide both reasoning and retrieval processes through planning that significantly outperforms baselines on challenging domain-knowledge-intensive and reasoning-heavy tasks. | [
"Ruochen, Zhao",
"Xingxuan, Li",
"Weiwen, Xu",
"Shafiq, Joty",
"Fangkai, Jiao",
"Lidong, Bing"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01428 | https://arxiv.org/abs/2410.01428 | https://www.semanticscholar.org/paper/8e90bba98fdd41a9046ba00ad527441a447c56bb |
|
CodePMP: Scalable Preference Model Pretraining for Large Language Model Reasoning | Large language models (LLMs) have made significant progress in natural language understanding and generation, driven by scalable pretraining and advanced finetuning. However, enhancing reasoning abilities in LLMs, particularly via reinforcement learning from human feedback (RLHF), remains challenging due to the scarcity of high-quality preference data, which is labor-intensive to annotate and crucial for reward model (RM) finetuning. To alleviate this issue, we introduce CodePMP, a scalable preference model pretraining (PMP) pipeline that utilizes a large corpus of synthesized code-preference pairs from publicly available high-quality source code. CodePMP improves RM finetuning efficiency by pretraining preference models on large-scale synthesized code-preference pairs. We evaluate CodePMP on mathematical reasoning tasks (GSM8K, MATH) and logical reasoning tasks (ReClor, LogiQA2.0), consistently showing significant improvements in reasoning performance of LLMs and highlighting the importance of scalable preference model pretraining for efficient reward modeling. | CodePMP improves RM finetuning efficiency by pretraining preference models on large-scale synthesized code-preference pairs from publicly available high-quality source code, and highlighting the importance of scalable preference model pretraining for efficient reward modeling. | [
"Huimu, Yu",
"Xing, Wu",
"Weidong, Yin",
"Songlin, Hu",
"Debing, Zhang"
] | 2024-10-03T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.02229 | https://arxiv.org/abs/2410.02229 | https://www.semanticscholar.org/paper/58f614941629541c8c04acdb8acb9e3fb350ac5a |
|
CodePlan: Unlocking Reasoning Potential in Large Langauge Models by Scaling Code-form Planning | Despite the remarkable success of large language models (LLMs) on traditional natural language processing tasks, their planning ability remains a critical bottleneck in tackling complex multi-step reasoning tasks. Existing approaches mainly rely on prompting or task-specific fine-tuning, often suffering from weak robustness and cross-task generalization. To address the limitation, we introduce CODEPLAN, a scalable paradigm that empowers LLMs to generate and follow code-form plans pseudocode that outlines high-level, structured reasoning processes. By leveraging the structured and versatile nature of code, CODEPLAN effectively captures the rich semantics and control flows inherent to sophisticated reasoning. Importantly, CODEPLAN allows the automatic extraction of code-form plans from massive, wide-ranging text corpora without the need for curated, task-specific datasets. This enables it to scale up efficiently and improve reasoning capabilities across diverse scenarios. To train CODEPLAN, we construct a large-scale dataset of 2M examples that integrate code-form plans with standard prompt-response pairs from existing corpora. With minimal computation overhead during both training and inference, CODEPLAN achieves a 25.1% relative improvement compared with directly generating responses, averaged across 13 challenging multi-step reasoning benchmarks, spanning mathematical reasoning, symbolic reasoning, instruction-following, multi-hop QA, and decision-making tasks. Further analysis reveals CODEPLAN's increasing performance gains on more complex reasoning tasks, as well as significant data efficiency thanks to its generalization ability. | null | [
"Jiaxin, Wen",
"Jian, Guan",
"Wei, Wu",
"Hongning, Wang",
"Minlie, Huang"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.12452 | https://arxiv.org/abs/2409.12452 | null |
|
Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting | Hand-crafting high quality prompts to optimize the performance of language models is a complicated and labor-intensive process. Furthermore, when migrating to newer, smaller, or weaker models (possibly due to latency or cost gains), prompts need to be updated to re-optimize the task performance. We propose Concept Distillation (CD), an automatic prompt optimization technique for enhancing weaker models on complex tasks. CD involves: (1) collecting mistakes made by weak models with a base prompt (initialization), (2) using a strong model to generate reasons for these mistakes and create rules/concepts for weak models (induction), and (3) filtering these rules based on validation set performance and integrating them into the base prompt (deduction/verification). We evaluated CD on NL2Code and mathematical reasoning tasks, observing significant performance boosts for small and weaker language models. Notably, Mistral-7B's accuracy on Multi-Arith increased by 20%, and Phi-3-mini-3.8B's accuracy on HumanEval rose by 34%. Compared to other automated methods, CD offers an effective, cost-efficient strategy for improving weak models' performance on complex tasks and enables seamless workload migration across different language models without compromising performance. | null | [
"Emmanuel Aboah, Boateng",
"Cassiano O., Becker",
"Nabiha, Asghar",
"Kabir, Walia",
"Ashwin, Srinivasan",
"Ehi, Nosakhare",
"Victor, Dibia",
"Soundar, Srinivasan"
] | 2024-08-18T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2408.09365v1 | https://arxiv.org/abs/2408.09365 | https://www.semanticscholar.org/paper/4e3dda2c83991398b624e3bbb3fbb658659a296a |
|
Considerations on Approaches and Metrics in Automated Theorem Generation/Finding in Geometry | The pursue of what are properties that can be identified to permit an automated reasoning program to generate and find new and interesting theorems is an interesting research goal (pun intended). The automatic discovery of new theorems is a goal in itself, and it has been addressed in specific areas, with different methods. The separation of the "weeds", uninteresting, trivial facts, from the "wheat", new and interesting facts, is much harder, but is also being addressed by different authors using different approaches. In this paper we will focus on geometry. We present and discuss different approaches for the automatic discovery of geometric theorems (and properties), and different metrics to find the interesting theorems among all those that were generated. After this description we will introduce the first result of this article: an undecidability result proving that having an algorithmic procedure that decides for every possible Turing Machine that produces theorems, whether it is able to produce also interesting theorems, is an undecidable problem. Consequently, we will argue that judging whether a theorem prover is able to produce interesting theorems remains a non deterministic task, at best a task to be addressed by program based in an algorithm guided by heuristics criteria. Therefore, as a human, to satisfy this task two things are necessary: an expert survey that sheds light on what a theorem prover/finder of interesting geometric theorems is, and - to enable this analysis - other surveys that clarify metrics and approaches related to the interestingness of geometric theorems. In the conclusion of this article we will introduce the structure of two of these surveys - the second result of this article - and we will discuss some future work. | null | [
"Pedro, Quaresma",
"Pierluigi, Graziani",
"Stefano M., Nicoletti"
] | 2024-01-22T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2401.11905v1 | https://arxiv.org/abs/2401.11905 | https://www.semanticscholar.org/paper/afd7923f6ed3efa68ed7039dee8f496fd3024183 |
|
Continual Learning and Out of Distribution Generalization in a Systematic Reasoning Task | Humans often learn new problem solving strategies from a narrow range of examples and generalize to examples out of the distribution (OOD) used in learning, but such generalization remains a challenge for neural networks. This impacts learning mathematical techniques, which can apply to unbounded problem spaces (e.g. all real numbers). We explore this limitation using neural networks trained on strategies for solving specified cells in $6\times6$ Sudoku puzzles using a novel curriculum, where models first learn two preliminary tasks, then we assess OOD generalization during training on a subset of the set of possible training examples of a more complex solution strategy. Baseline models master the training distribution, but fail to generalize OOD. However, we introduce a combination of extensions that is sufficient to support highly accurate and reliable OOD generalization. These results suggest directions for improving the robustness of models trained with the highly imbalanced data distributions in natural data sets. | null | # Continual Learning and Out of Distribution Generalization in a Systematic Reasoning Task
**Mustafa Abdool**
Department of Computer Science
Stanford University
```
[email protected]
```
**Andrew J. Nam**
Department of Psychology
Stanford University
```
[email protected]
```
**Abstract**
**James L. McClelland**
Department of Psychology
Stanford University
```
[email protected]
```
Humans often learn new problem solving strategies from a narrow range of examples and generalize to examples out of the distribution (OOD) used in learning,
but such generalization remains a challenge for neural networks. This impacts
learning mathematical techniques, which can apply to unbounded problem spaces
(e.g. all real numbers). We explore this limitation using neural networks trained
on strategies for solving specified cells in 6 × 6 Sudoku puzzles using a novel
curriculum, where models first learn two preliminary tasks, then we assess OOD
generalization during training on a subset of the set of possible training examples of
a more complex solution strategy. Baseline models master the training distribution,
but fail to generalize OOD. However, we introduce a combination of extensions
that is sufficient to support highly accurate and reliable OOD generalization. These
results suggest directions for improving the robustness of models trained with the
highly imbalanced data distributions in natural data sets.
**1** **Introduction and Related Work**
The ability to learn new skills and infer abstract rules from limited data is a highly desirable property
for machine learning systems, especially in the domain of mathematical problem solving, in which
highly generalizable principles and problem solving strategies abound. Large language models
(LLMs) with billions of parameters may be able to support many advanced cognitive abilities if their
training data samples these abilities widely enough, but their ability to learn truly novel information
and to exhibit out of distribution generalization (OODG) remains limited [1].
Here we address this limitation through the popular puzzle game Sudoku, which affords this opportunity due to the inherently abstract nature of the rules and reasoning strategies required. Although
Sudoku has been solved using a neural network model [2], the network used for this fails to generalize
OOD [3], as often happens with neural networks. In contrast, humans participants who learn a multistep Sudoku strategy (the Hidden Single strategy) from a narrow range of training examples show
strong OODG. In earlier work, we found that small-scale transformers could generalize the strategy
OOD [3], but this required continuously interleaving the narrowly sampled Hidden Single (HS)
examples with examples of component strategies spanning the entire distribution of puzzle instances.
Such an approach is data-inefficient, and it is a limitation compared to the human participants in [4]
who learned and generalized without interleaving.
In this work, we achieve reliably accurate OODG of the Hidden Single technique without interleaving
simpler strategies. We define reliably accurate OODG as achieving 90% correct final accuracy on
OOD problems in 10 of 10 model runs each with a separate random seed. We describe several techniques that significantly improve OODG over the baseline model previously used in [3]. Importantly,
our baseline model is able to solve unseen examples when trained on a dataset drawn from the full
distribution of puzzles. However, it fails to generalize OOD when trained with a restricted subset of
37th Conference on Neural Information Processing Systems (NeurIPS 2023) Workshop on MATH-AI.
-----
puzzles for the HS strategy, even after previously learning from the full distribution of two component
strategies. Thus, the baseline model has sufficient capacity to solve the full problem but exhibits poor
OODG in this regime. We then introduce several extensions that allow successful OODG in this
curriculum learning setting. When combined, these extensions support reliably accurate OODG on a
challenging restriction of the training data distribution, as we detail below. Our contributions are:
1. We focus attention on out-of-distribution generalization in a cumulative learning setting
where new skills build on abilities previously acquired.
2. We introduce a task setting and dataset in which a generic baseline model fails to exhibit
OODG, despite having sufficient capacity to solve the task when trained on the entire
distribution of problems.
3. We identify several factors that each contribute to achieving robust OODG over the baseline
model.
**2** **Methods**
**2.1** **Problem Description**
**Problem overview.** Our approach is inspired by the approach taken in a tutorial used with human
participants [3]. We train networks to evaluate whether a specified digit is or is not the correct solution
to a specified target cell in a specified house type (row, column or 2 × 3 box) in a given 6 × 6 Sudoku
grid. A valid Sudoku solution is one in which each unique house contains each digit exactly once.
By specifying the target digit, cell, and house type in a partially completed grid, as in the human
experiment in [3], we avoid the challenge of finding good target cells, house-types, and digits in
Sudoku puzzles, focusing on OODG of the process of evaluating a possible solution once a possible
target cell, solution digit, and house-type have been specified.
The network is given a board state along with a target cell and target digit that might or might not go
in that cell (see Table 1.) A house type specification (row, column, or box) is also given for two of the
three tasks (Full House and Hidden Single). The final output in these cases specifies whether or not
the digit must go in the cell. Note that the solution is only ‘yes’ if there is sufficient information to
determine that the target digit can go in the specified cell with certainty.
**Sudoku Tasks.** The model training data consists of examples from three tasks.
1. Full House: Determine whether every cell other than the target cell in the specified house is
filled with a digit other than the target digit. If yes for all cells in the house other than the
target cell, the target digit must go in the target cell.
2. Can Contain: Determine whether a target cell can contain the target digit. The cell must
be empty and the target digit must not already be present in the same row, column, or box.
Note that each instance of the Can Contain task queries 5 cells, as in the other two tasks, to
balance the number of cells trained across tasks.
3. Hidden Single: Iterate over the non-target cells in the specified house (as in the Full House
strategy) and check whether any of these cells can contain the target digit. If none of the
other cells can contain it, then the target digit must be placed in the target cell.
The Hidden Single task requires combining the procedures required for the other two tasks, thus
making it a good target for investigating compositional reasoning through curriculum-based learning.
**Training curriculum and assessment of OODG.** All our experiments are conducted in a cur_riculum learning setting where the model is trained sequentially on the Full House, Can Contain,_
and Hidden Single tasks (See SM Figure 2), and all involve 10 model runs with different random
seeds. To solve the Hidden Single task, the model must combine the skill of iterating over cells in a
certain house (from the Full House dataset) and checking for direct constraints (from the Can Contain
dataset). The Full House and Can Contain training data was sampled from the full distribution of
possible puzzles (in terms of digits and house types), so that the network has the opportunity to learn
to encode and decode input and output tokens and develop a general solution for these constituent
strategies. However, the Hidden Single task is trained on a restricted set of puzzles, where key
dimensions such as digits or house types are held out. As it learns the Hidden Single task, we test the
network on held out puzzles to assess how well it generalizes OOD.
-----
Table 1: An input grid with example token sequences for all three tasks. Prompt text in normal
font, target / model-generated tokens in bold. Note that, unlike this example, an independent grid is
generated for each instance of each task.
Full House Can Contain Hidden Single
<SOS>full_house
goal_cell row 2 column 2
house_type box
digit 4
is_filled
**row 1 column 1 no**
**row 1 column 2 yes**
**row 1 column 3 no**
**row 2 column 1 yes**
**row 2 column 3 yes**
**solution no 4 <EOS>**
<SOS>
digit 2
can_contain
row 4 column 3 yes
row 1 column 1 no
row 2 column 3 no
row 5 column 2 yes
row 4 column 1 no
<EOS>
<SOS>hidden_single
goal_cell row 6 column 2
house_type column
digit 3
can_contain
**row 1 column 2 no**
**row 2 column 2 no**
**row 3 column 2 no**
**row 4 column 2 no**
**row 5 column 2 no**
**solution yes 3 <EOS>**
**2.2** **Model Description**
**Baseline transformer architecture and task sequence conventions.** We use a 3-layer transformer
with a dimensionality of 90 as our baseline model (SM Figure 1). The model receives as input the
Sudoku grid, followed by and a sequence of prompt tokens. We encode the grid as a flattened sequence
of individual cell encodings, where each cell is represented by concatenating 30 dimensional learned
embeddings of the cell’s row r, column c, and contents (digit or blank) d into a 90 dimensional
embedding vector. The sequence of text tokens (the prompt) are added to position tokens and
embedded into a 90 dimensional vector.
The model output is also a sequence of text tokens, sampled autoregressively. For the Full House and
Hidden Single problems, the output sequence contains an iteration over all cells in the house that
the target cell belongs to of the specified type, along with the answer to the "sub-problem" for each
iterated cell. Next comes a ‘solution’ token, then the answer (‘yes’ or ‘no’) indicating whether the
target digit must go in the target cell, then a repetition of the target digit. For the Can Contain task,
the prompt specifies a target digit and the ‘can-contain’ token, then iterates over 5 randomly-chosen
target cells, with a single output token (‘yes’ or ‘no’) for each, as shown in Table 1.
**Baseline model performance.** We first train the baseline model on examples drawn from the full
distribution of HS puzzles to verify that our model has sufficient capacity to learn the hidden single
technique. We then test for OODG by training the same architecture on increasingly restrictive puzzle
distributions by holding out digits only and then both digits and one house type. Even in the least
restrictive cases, we find that the baseline model only reaches 30.6% OODG accuracy despite solving
unseen within-distribution problems with near 100% accuracy.
**2.3** **Extensions that Enhance OODG**
**Larger transformer dimension with and without weight sharing.** Although the baseline model
learns to perform the HS task when trained with samples from the full distribution of HS problems,
it fails to generalize OOD with even two held out digits. For our first extension to the model, we
increase the transformer dimension (TD) from 90 to 252. We explore two variants of this, one of
which controls for the number of learnable parameters by sharing the weights across transformer
layers as in [5, 6]. In the other, non weight-sharing variant, the parameter count is about 2.5 times
that of the baseline model.
**Shared digit representation for board and text.** In the baseline model, the digits on the board are
embedded with the same matrix as the row and column indices, while the digits in the prompt text
(e.g., the target digit) are embedded using different weight matrices as in [3]. Since board and text
digits correspond, our second extension uses the same embedding matrix for digits on the board and
in the text while using a different embedding matrix for row and column indices. We hypothesize
that this might promote alignment and improve OODG.
-----
**Separate board architecture.** In the baseline model, the board (after flattening) is concatenated
with the text tokens to form the input layer for the transformer stack and shared weights are used for
processing both the board state and the solution sequence, possibly causing unhelpful competition. To
address this possibility, our third extension keeps the baseline architecture, which becomes a solution
sequence decoder, and uses a separate board encoder network to process the board state for use by
the solution sequence decoder. The board encoder network is stripped down in that it only maps each
of the cell embeddings to a key and a value with its own Wk and Wv matrices. The single set of keys
and values is queried by each of the three transformer layers in the solution-sequence decoder.
**Encouraging OODG using synaptic intelligence.** In any curriculum learning setting, new learning
can degrade connection-based knowledge acquired in previous tasks. For example, when training on
only 2 out of 3 house types in the Hidden Singles task, weights supporting iteration over the held out
house type that are acquired in the Full House task might become degraded, interfering with OODG
to problems using the held-out house type. To mitigate this, we experiment with synaptic intelligence
(SI) [7] which discourages changes to parameters that are considered important in minimizing the loss
in previous tasks. Note that we use SI to explore whether it can enhance OODG rather than to avoid
interference with performance on previous tasks as is more commonly done [8]. As a comparison to
SI, we also train models where we decrease the learning rate (labeled LR Decay in Tables) at each
task boundary in the curriculum.
**Performance measures.** The results tables below show results obtained with 10 runs of each model
variant. Peak OOD Acc refers to the highest accuracy achieved on the test set of OOD examples as
the network learns on the in-distribution training examples. Final OOD Acc refers to accuracy on the
same problems at the end of the HS training phase (see SM for further details).
**3** **Results**
**Digit generalization.** We first perform a comprehensive study to understand the impact of our
extensions on OODG relative to the baseline model (See Table 2), using a training set in which 2 of
the 6 digits are held out from use as the target digit. The baseline model with only LR decay performs
quite poorly (top line in Table 2), and the larger transformer dimension did little by itself. However,
weight sharing, shared digit representation, and separate board architecture all improve peak OODG
accuracy, up to 96.5% accuracy when combined (bolded row before SI in Table 2). In all these cases,
generalization performance erodes with continued training. Interestingly, SI alone seems to overconstrain the baseline model, preventing it from even being able to solve within-distribution (WD)
problems from the training set. However, the impact of SI significantly improves when increasing
the embedding and transformer dimension from 90 to 252, reaching over 90% accuracy on almost
all runs with or without weight sharing. We next examine a more challenging OOD setting, holding
out 4 of the 6 digits as targets during HS training (SM Table 6). Here, SI improves OODG with or
without weight sharing. The representation and architecture extensions are not beneficial without SI,
but combining them with SI allows the model to achieve reliably accurate OOGD.
Table 2: Results for Two Digit OODG
|Larger TD|Weight Shar- ing|Rep & Arch|LR De- cay|SI|Median WD Acc|Median Peak OOD Acc|Mean Final OOD Acc|Median Final OOD Acc|Runs over 90% OOD Acc|Mean updates to 90% OOD Acc|
|---|---|---|---|---|---|---|---|---|---|---|
||||✓||100%|58.7%|39.7%|37.8%|0%|-|
|✓|||✓||100%|59.1%|41.3%|40.0%|0%|-|
|✓|✓||✓||100%|81.1%|74.7%|71.7%|40%|320|
|✓|✓|Digits|✓||100%|85.4%|55.5%|59.0%|40%|620|
|✓|✓|Board|✓||100%|88.2%|66.5%|65.2%|40%|524|
|✓|✓|Both|✓||100%|96.5%|71.8%|72.6%|80%|452|
|||||✓|55.2%|55.2%|56.1%|53.1%|10%|1628|
|✓|✓|||✓|100%|98.9%|88.1%|95.1|80%|116|
|✓||||✓|100%|100%|90.1%|98.2%|90%|152|
-----
**House type and digit generalization.** Our final experiments further restrict the training set by
holding out an entire target house type (row) as well as 4 of the 6 digits as targets. Holding out all
rows makes OODG especially challenging, since the model can only succeed by combining what it
learned about iterating over rows in the Full House phase with its learning about the full house task
from training with the column and box house types. As shown in Table 3, the 252-dimension model
with the representation and architecture extensions, SI, and without weight sharing achieved reliably
accurate OODG, and did so after an average of only 104 gradient updates. Other models usually
reached high OODG accuracy within 200-1000 gradient updates, but this gradually declined as the
models continued to train on the restricted HS training set. For more detailed results, see SM Table 7
and SM Figure 4.
Table 3: Results for four-digit and house type OODG
|Larger TD.|Weight Shar- ing|Rep & Arch|LR De- cay|SI|Median Peak OOD Acc|Mean Final OOD Acc|Median Final OOD Acc|Runs over 90% OOD Acc|Mean updates to 90% OOD Acc|
|---|---|---|---|---|---|---|---|---|---|
|✓|||✓||60.5%|35.7%|36.0%|0%|-|
|✓||||✓|98.2%|80.6%|93.6%|60%|192|
|✓|✓|||✓|87.9%|74.3%|77.0%|20%|1046|
|✓||Both||✓|100%|95.7%|98.7%|100%|104|
|✓|✓|Both||✓|90.2%|77.0%|82.1%|50%|926|
**4** **Open Questions and Future Directions**
Today’s neural networks often appear to learn new tasks from just a few examples or instructions.
However, they can be extremely sensitive to prompting details and often fail to generalize knowledge
tuned into their weights [1]. Our work contributes to addressing this problem. Beginning with
a transformer network that could learn our target Sudoku task but could not generalize to OOD
examples, we find extensions that overcome this limitation within our specific problem domain.
However, we do not see our work as providing a general solution. We hope the comments below
indicate some of the open issues and approaches that could be pursued to address them.
**Avoiding problem-specific inductive biases:** Our representation and architecture enhancements
prove important in our specific task and architecture context, but they can seem problem specific.
Separating spatial and sequential networks while sharing representations of corresponding elements
(e.g. printed and spoken words) may be broadly useful, but as used here, they seem to tailor the
architecture to the problem. This points toward a broader issue that may be more generally relevant:
While problem-specific inductive biases can surely be useful, we believe it is important to find
solutions that work in large-scale models that learn to solve a wide range of different tasks.
**Finding general solutions for learning in natural contexts:** The various combinations of our
extensions and restrictions of the range of data produce a complex pattern of effects on OOD
generalization. This poses a significant challenge for the development of models that can generalize
well on a wide range of tasks, since these inter-dependencies are likely to vary from task to task.
**Achieving human-level data efficiency:** Even when we achieve reliably accurate OODG from a
narrow range of examples in our final experiment, this still requires over 100 gradient updates based
on about 20,000 training examples. In contrast, humans can typically learn the hidden single strategy
in fewer than 10 examples [4].
One approach to tackling these issues may rely on in-context learning [9] rather than the in-weights
learning we explore in this paper. In-context learning may be less narrowly focused [1], requires only
a few examples, and may prove to be more independent of architectural details. One possibility is
that a general solution will involve initial in-context learning, with the results stored for later re-use
in a hippocampus-like fast learning system, followed by gradual consolidation in weights [10].
-----
**References**
[1] Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz
Korbak, and Owain Evans. The reversal curse: Llms trained on "a is b" fail to learn "b is a",
2023.
[2] Rasmus Berg Palm, Ulrich Paquet, and Ole Winther. Recurrent relational networks, 2018.
[3] Andrew J. Nam, Mustafa Abdool, Trevor Maxfield, and James L. McClelland. Achieving
and understanding out-of-distribution generalization in systematic reasoning in small-scale
transformers, 2022.
[4] Andrew J. Nam and James L. McClelland. Systematic human learning and generalization from
a brief tutorial with explanatory feedback, 2023.
[5] Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Łukasz Kaiser. Universal transformers, 2019.
[6] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu
Soricut. Albert: A lite bert for self-supervised learning of language representations, 2020.
[7] Friedemann Zenke, Ben Poole, and Surya Ganguli. Continual learning through synaptic
intelligence, 2017.
[8] Andrea Cossu, Antonio Carta, and Davide Bacciu. Continual learning with gated incremental
memories for sequential data processing. In 2020 International Joint Conference on Neural
_Networks (IJCNN). IEEE, jul 2020._
[9] Stephanie Chan, Adam Santoro, Andrew Lampinen, Jane Wang, Aaditya Singh, Pierre
Richemond, James McClelland, and Felix Hill. Data distributional properties drive emergent in-context learning in transformers. Advances in Neural Information Processing Systems,
35:18878–18891, 2022.
[10] James L McClelland, Bruce L McNaughton, and Randall C O’Reilly. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and
failures of connectionist models of learning and memory. Psychological review, 102(3):419,
1995.
[11] Yuxuan Li and James L. McClelland. Systematic generalization and emergent structures in
transformers trained on structured tasks, 2022.
[12] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information
_processing systems, 30, 2017._
-----
**5** **Supplementary Material**
**5.1** **Overview of rules and terminology.**
In Sudoku, the goal is to fill out a NxN grid so that exactly one of each digit from 1-N appears in
every house where a house refers to a particular instance of a row, column or box. In addition, every
cell on the board should be contained in exactly one type of each house. Although Sudoku is typically
played on a 9x9 grid, we use 6x6 grids (with 2x3 boxes) which require similar complex reasoning
procedures while reducing computational requirements.
**5.2** **Training Data Details**
The positive examples for each task (Full House, Can Contain and Hidden Single) were obtained
by incrementally solving valid 6x6 Sudoku puzzles (ie. puzzles with a single unique solution) and
checking if the strategy was applicable to solve any cell during the process.
Generating the negative examples for each of the corresponding tasks was slightly more complicated
as just sampling random incorrect digits for a given cell would likely be too easy. Specifically, for the
_Hidden Single strategy, we want the model to learn how to combine the answers for the Can Contain_
sub-problems rather than focusing on just eliminating a digit due to direct constraints. This idea is
similar to studies of humans learning to play Sudoku as in [4] - where a distractor digit was used
as an example of a more plausible negative choice. Table 4 below describes the negative sampling
strategies for each of the three tasks. All three task datasets were balanced to have an equal number
of positive and negative instances.
Table 4: Negative sampling strategies for each of the various task types
|Task Type|Negative Sampling Strategy|
|---|---|
|Full House|Sample from houses where there are unfilled cells other than the target cell or one of the cells contains the target digit|
|Can Contain|Sample from cells and target digits where the target digit has at least one direct conflict|
|Hidden Single|Sample from cells, target digits and houses where the target digit has no direct constraints but the Hidden Single strategy is not valid to solve that cell (ie. the digit can be placed in another cell in the house)|
**5.3** **Baseline Model Architecture Details**
A diagram of the baseline model architecture is shown in Figure 1.
**5.4** **Model Details**
The core of the baseline model is a 3-layer transformer encoder supported by an embedding weight
matrix for digits, grid cell positions (row and column) and the input text. To create the final
representation for each board cell, the embedding for the digit, row and column are concatenated
together. There is also a decoder layer to output a final distribution over tokens in our small
vocabulary (around 30 tokens total). Note that while all tokens after the prompt starts use the typical
autoregressive attention mask (ie. they are unable to attend to future tokens), the initial tokens which
make up the board itself (after flattening it into cells) use a special attention mask that allows any
board token to attend to any other board token. In contrast, the tokens in the prompt can only attend
to previous prompt tokens as in the usual transformer decoder case.
The overall dimension of the transformer inputs and outputs are 252 or 90 (in the Larger TD case
or default case, respectively). All text tokens are encoded using an embedding with the same
dimensionality as the transformer layer and position information is added to the vectors using the
_Sorted Random Label strategy from [11]. The 36 grid cell vectors and all token vectors are passed to_
the transformer, which is composed of 3 encoder layers with 6 heads and 1024-dim feed-forward
-----
Figure 1: Diagram of the model architecture and how it processes inputs.
layers, and the output vectors are decoded using a linear layer to form the final logit values for the
predicted output tokens.
During training, we use teacher-forcing to predict the next token at each sequence position and
cross-entropy with the target sequence. We mask the loss so that the loss is only applied after the
’digit’ token in the hidden single and full house tasks, and after the column number token in the naked
single tasks. The loss is computed using cross-entropy and the model is optimized using Adam [12]
with a base learning rate of 0.0001. We keep the same batch size of 192 samples for each stage in the
curriculum, performing a weight update at the end of every batch.
**5.5** **Curriculum and Performance Assessment Details**
See Figure 2 below for an overview of the curriculum training phase. We do 40k weight updates for
the Full House task, 30k for the Can Contain task and 50k for the Hidden Single task. We evaluate
OODG performance as measured on a test set of 1,000 OOD Hidden Single task examples. We
perform this evalation after every 50 weight updates during the first 1000 weight updates, and after
every 250 updates after that. Final OOD accuracy for a run is the accuracy over 40 evaluations
occurring within the last 10k updates.
Figure 2: The curriculum used in measuring OODG in the example of holding out two digits from
HS puzzles
-----
**5.6** **Shared Digit Representation and Separate Board Architecture Details**
To provide a shared digit representation, we re-use the embedding matrix for prompt tokens for
computing the digit embedding. Then, we concatenate the row and column embedding for a board
cell and add it to the digit embedding to produce the final embedding for a cell.
This was achieved by encoding a board cell as follows:
_ce = FFθ(De(crow)|De(ccolumn)|Ve(cdigit))_
Where Ve is the embedding matrix for text. To have the overall dimension of the cell encoding match
the text encoding, we use a feed-forward layer (FF ) to project the concatenated embedding back
into the same output dimension as Ve.
To provide a separate board representation, we first compute a key and value embedding for each
cell in the board that each decoder layer can then query independently. This is similar to the idea
used in the typical transformer decoder [12] - which can perform self-attention over the output of a
transformer encoder. A diagram for this architecture is shown in Figure 3.
Figure 3: How a shared embedding for the board state is used across all transformer layers
**5.7** **Synaptic Intelligence Details**
We use a regularization constant for c = 0.15 which determines how strongly the reference weight
from past tasks in the curriculum is weighted in the overall loss function.
**5.8** **Out of Distribution Dataset Details**
See Table 5 for a full description of the restricted datasets used to test OODG.
**5.9** **Further OODG Results**
**5.9.1** **Four Digit OODG Results**
We also ran several experiments on a four digit OOD dataset, results are shown in Table 6.
-----
Table 5: Different OOD tasks for evaluation
|Dataset type|Training Distribution for HS puzzles|Holdout Distribution for HS puzzles|
|---|---|---|
|Two Digit Generalization|Hidden single problems with target digits {1, 2, 3, 4}|Test generalization to new dig- its {5, 6}|
|Four Digit Generalization|Hidden single problems with target digits {1, 2}|Test generalization to new dig- its {3, 4, 5, 6}|
|Digit and House Type Generalization|Hidden single problems with target digits {1, 2} and only column or box house types|Test generalization on new dig- its {3, 4, 5, 6} and new house type (row)|
Table 6: Results for four digit OODG
Holdout Distribution for HS
puzzles
Test generalization to new digits {5, 6}
Test generalization to new digits {3, 4, 5, 6}
Test generalization on new digits {3, 4, 5, 6} and new house
type (row)
|Larger TD|Weight Shar- ing|Repr & Arch|LR De- cay|SI|Median Peak OOD Acc|Mean Final OOD Acc|Median Final OOD Acc|Runs over 90% Peak Acc|Mean updates to 90% Peak Acc|
|---|---|---|---|---|---|---|---|---|---|
|✓|||✓||71.6%|50.7%|45.8%|20%|6316|
|✓|✓|Both|✓||73.7%|36.7%|36.3%|0%|N/A|
|✓||||✓|90.0%|70.2%|77.2%|40%|536|
|✓|✓|||✓|87.9%|75.5%|77.9%|40%|422|
|✓||Both||✓|98.0%|96.2%|95.2%|100%|368|
|✓|✓|Both||✓|98.7%|95.4%|95.8%|100%|206|
**5.9.2** **Digit and House Type OODG Results**
See Figure 4 for a full distribution (10 runs) for each experiment in the digit and house type OODG
dataset and Table 4 for accuracy by holdout type.
Figure 4: Distribution of actual training runs (10 for each experiment) for the digit and house type
holdout dataset
-----
Table 7: Accuracy per holdout dimension for four-digit and house type OODG experiment
|Larger TD|Weight Shar- ing|Rep & Arch|LR De- cay|SI|Median Digits Only Acc|Median House Type Only Acc|Median Digits and House Type Acc|
|---|---|---|---|---|---|---|---|
|✓|||✓||45.8%|2.4%|1.6%|
|✓|✓|Both|✓||35.4%|9.5%|3.0%|
|✓||||✓|92.5%|98.1%|90.5%|
|✓|✓|||✓|85.3%|79.2%|64.5%|
|✓||Both||✓|99.0%|99.2%|97.4%|
|✓|✓|Both||✓|89.3%|87.7%|74.9%|
-----
| [
"Mustafa, Abdool",
"Andrew, Nam",
"James, McClelland"
] | 2023-10-28T00:00:00 | null | false | 0 | 0 | null | https://openreview.net/forum?id=NdSGKZvX3z | null | null |
Contrastive finetuning of generative language models for informal premise selection | N/A | null | # Contrastive finetuning of generative language models for informal premise selection
Jesse Michael Han[1][,][2], Tao Xu[1], Stanislas Polu[1],
Arvind Neelakantan[1], and Alec Radford[1]
1 OpenAI
2 University of Pittsburgh
**Introduction**
Premise selection [6] is a classic problem in automated theorem proving (ATP) which asks
how to select the most relevant lemmas useful for proving a given theorem. As such, it is
firmly situated in the domain of formal mathematics and has long been a target for machine
learning methods in ATP [9, 3, 4, 5, 10, 1]. In this work, we consider informal premise selection,
where the statements of premises and theorems are in natural language and labels are given
by references to premises in ground truth informal proofs. The NaturalProofs dataset, recently
introduced in [12], frames informal premise selection as an information retrieval task.
We explore the applications of pretrained generative language models finetuned on a CLIPstyle [8] contrastive objective for retrieval over informal mathematics corpora. We show that
WebMath pretraining [7] leads to significant performance gain compared to pretraining only on
the same data as GPT-3 [2]. We achieve a new state-of-the-art on the NaturalProofs dataset [12],
improving on the previous state-of-the-art by up to 80% while using causal rather than bidirectional transformers and fewer parameters overall.
**Methodology**
We use decoder-only transformers similar to GPT-3 [2] with nlayers = 12, dmodel = 768,
_nhead = 12, and dhead = 64, totalling to 125M trainable parameters. After pre-training on_
the autoregressive language modelling task, we adapt our models for embedding-based retrieval
as follows. Given a query/document x, we compute an embedding **x ∈** R[d][model] for x by taking
**x to be the activations for the end-of-text (EOT) token. We finetune our models using an In-**
foNCE loss [11] exactly analogous to the objective used by CLIP [8]. That is, given a batch of N
b
positive (query, document) pairs, we train the encoder to maximize the cosine similarity of theb
_N positive examples while minimizing the cosine similarity of the N_ [2] _−_ _N negative examples._
At test time, we retrieve documents for a given query by maximizing the cosine similarity of
their embeddings. We test our methodology on the NaturalProofs dataset [12], which comprises
(theorem, premise) pairs extracted from proofs of theorems on ProofWiki. We use the same
theorem-wise train/test split in this work.
Unlike CLIP [8] or the BERT-based model studied in NaturalProofs [12], we use the same
encoder to embed both queries (theorems) and documents (premises). Since “X is useful to
prove Y ” is an asymmetric relation and we use a CLIP-style symmetric cross-entropy loss, the
encoder must be allowed to distinguish between theorems and references. We do this by simply
formatting the inputs to the transformer as
```
Theorem title: <title> <newline> Theorem statement: <statement>
Reference title: <title> <newline> Reference statement: <statement>.
```
-----
Contrastive finetuning of generative language models for informal premise selection Han et al.
During contrastive finetuning, we sample batches of N = 2048 pairs by first sampling N
theorems from the NaturalProofs train split, and then further sampling a positive reference
from the proof of each theorem in the batch. All our models are trained for approximately 7000
steps with the Adam optimizer, using 32 V100 GPUs.
We study three pretraining regimes for the NaturalProofs informal premise selection task:
- No pretraining. The model is randomly initialized and only learns theorem/premise
representations through contrastive training.
- GPT-3 style pretraining. The model is pretrained for 300B tokens on the same data
(a mix of filtered CommonCrawl, WebText, books, and Wikipedia) as GPT-3 [2].
- WebMath pretraining. Starting from the final snapshot of the previous model, we
train for another 72B tokens on the WebMath dataset [7], comprising a mix of math
arXiv, Python, Math StackExchange, Math Overflow, and PlanetMath.
We refer to our methodology for informal premise selection as contrastive theorem-premise
training (CTPT) and denote the three models above by ctpt-no-pretrain, ctpt-webtext,
and ctpt-webmath.
**Results and discussion**
**recall@10** **recall@100** **avgp@100** **full@100** **full@1K**
**BERT** **20.27** **59.44** **14.01** **27.39** **70.52**
**ctpt-no-pretrain** 23.76 54.01 11.91 23.75 56.32
**ctpt-webtext** 34.39 65.45 17.97 34.76 64.51
**ctpt-webmath** **36.92** **70.39** **21.53** **39.49** **73.52**
Table 1: Our models’ performance on the NaturalProofs test set alongside results from [12].
Our main results are displayed in Table 1. The model ctpt-webmath outperforms the
previous state-of-the-art on all metrics. Our models also utilize 43% fewer parameters since the
BERT-based model embeds theorems and references with separate copies of bert-base-cased
(110M params). It is possible that the webtext data contains ProofWiki, but WebMath does
not and we consider the significant performance gap between ctpt-webtext and ctpt-webmath
to be of primary interest. We speculate that the models studied in [12] are severely undertrained
due to using only 200 randomly sampled negatives for each positive example.
**Future directions** The results discussed in this extended abstract are preliminary, albeit
promising. We plan to ablate the effect of including various components of the pretraining
(e.g. Python vs informal math in WebMath, the necessity of webtext), as well as the zeroshot performance of our models (i.e. no contrastive finetuning) and potential methods for
unsupervised retrieval. We consider the applications of our methodology to premise selection
in the formal setting (e.g. inside an ITP or ATP) to also be a promising future direction.
**Acknowledgements** We thank Raul Puri, Harrison Edwards, Yuhuai Wu, Sean Welleck, and
Christian Szegedy for helpful discussions.
-----
Contrastive finetuning of generative language models for informal premise selection Han et al.
## References
[1] Kshitij Bansal, Sarah M. Loos, Markus N. Rabe, Christian Szegedy, and Stewart Wilcox. Holist:
An environment for machine learning of higher order logic theorem proving. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Ma_chine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Pro-_
_ceedings of Machine Learning Research, pages 454–463. PMLR, 2019._
[2] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler,
Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott
Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya
Sutskever, and Dario Amodei. Language models are few-shot learners. In Hugo Larochelle,
Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Ad_vances in Neural Information Processing Systems 33: Annual Conference on Neural Information_
_Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020._
[3] Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas E´en, Fran¸cois Chollet, and Josef
Urban. Deepmath - deep sequence models for premise selection. In Daniel D. Lee, Masashi
Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural
_Information Processing Systems 29: Annual Conference on Neural Information Processing Systems_
_2016, December 5-10, 2016, Barcelona, Spain, pages 2235–2243, 2016._
[4] Cezary Kaliszyk, Fran¸cois Chollet, and Christian Szegedy. Holstep: A machine learning dataset for
higher-order logic theorem proving. In 5th International Conference on Learning Representations,
_ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings. OpenReview.net,_
2017.
[5] Daniel K¨uhlwein, Jasmin Christian Blanchette, Cezary Kaliszyk, and Josef Urban. Mash: Machine
learning for sledgehammer. In Sandrine Blazy, Christine Paulin-Mohring, and David Pichardie,
editors, Interactive Theorem Proving - 4th International Conference, ITP 2013, Rennes, France,
_July 22-26, 2013. Proceedings, volume 7998 of Lecture Notes in Computer Science, pages 35–50._
Springer, 2013.
[6] Daniel K¨uhlwein, Twan van Laarhoven, Evgeni Tsivtsivadze, Josef Urban, and Tom Heskes.
Overview and evaluation of premise selection techniques for large theory mathematics. In Bernhard Gramlich, Dale Miller, and Uli Sattler, editors, Automated Reasoning - 6th International
_Joint Conference, IJCAR 2012, Manchester, UK, June 26-29, 2012. Proceedings, volume 7364 of_
_Lecture Notes in Computer Science, pages 378–392. Springer, 2012._
[7] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving.
_CoRR, abs/2009.03393, 2020._
[8] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal,
Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever.
Learning transferable visual models from natural language supervision. CoRR, abs/2103.00020,
2021.
[9] Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom. Reason.,
37(1-2):21–43, 2006.
[10] Josef Urban, Geoff Sutcliffe, Petr Pudl´ak, and Jir´ı Vyskocil. Malarea SG1- machine learner for
automated reasoning with semantic guidance. In Alessandro Armando, Peter Baumgartner, and
Gilles Dowek, editors, Automated Reasoning, 4th International Joint Conference, IJCAR 2008,
_Sydney, Australia, August 12-15, 2008, Proceedings, volume 5195 of Lecture Notes in Computer_
_Science, pages 441–456. Springer, 2008._
[11] A¨aron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive
predictive coding. CoRR, abs/1807.03748, 2018.
[12] Sean Welleck, Jiacheng Liu, Ronan Le Bras, Hannaneh Hajishirzi, Yejin Choi, and Kyunghyun
-----
Contrastive finetuning of generative language models for informal premise selection Han et al.
Cho. Naturalproofs: Mathematical theorem proving in natural language. CoRR, abs/2104.01112,
2021.
-----
# Contrastive finetuning of generative language models for informal premise selection
Jesse Michael Han, Tao Xu, Stanislas Polu, Arvind Neelakantan, and Alec Radford
-----
## Premise selection / relevance filtering
### ● Premise selection:
○ Classic problem in automated theorem proving
○ Can we select the most relevant lemmas for proving a given theorem?
○ Usually attacked with neural methods in the formal setting
-----
## Premise selection / relevance filtering
- Informal premise selection:
- Given a natural language theorem statement and a pool of
_natural language definitions/lemmas_
- Can we select the most relevant references for proving that
theorem?
- Pro: more in-domain for existing NLP techniques
Con: no algorithmic feedback from proof search
-----
## ProofWiki retrieval task
-----
-----
Contrastive finetuning of autoregressive decoder-only transformers
### ● Use the same technique as CLIP: contrastive loss using features from a decoder-only transformer
-----
-----
-----
-----
-----
theorems
premises
-----
-----
Generative pre-training is useful
-----
Generative pre-training is useful
-----
Generative pre-training is useful
-----
Domain-specific generative pre-training is also useful
-----
Domain-specific generative pre-training is also useful
-----
Domain-specific generative pre-training is also useful
-----
## Strategy
- Generatively pre-train a language model
- Take activations for the end-of-text (EOT) token as embedding for
theorems and references
- Finetune using the contrastive InfoNCE loss described above.
-----
GPT-3 models
(Brown et al 2020)
-----
GPT-3 models
(Brown et al 2020)
-----
## Use a single model to embed both theorems and references
-
-----
Training details
- Use batch size of N=2048
- Sample N theorems from train set, then sample a reference from
each of the theorems to create the batch
- This way we don’t contrast references from the same theorem
- Train for ~7000 steps using Adam, 0.2X the pre-training learning
rate, using 32 V100 GPUs
-----
How does generative pre-training affect retrieval performance?
-----
-----
-----
-----
Retrieval-augmented language modeling of proofs
- Can we improve informal (theorem, proof) perplexity when additionally
conditioned on retrieved informal premises?
- Can we improve formal (theorem, proof) perplexity when additionally
conditioned on retrieved informal premises?
- Can we improve formal theorem-proving pass-rate when conditioned on
informal premises (either per-theorem or per-proofstep?)
Re-ranking to address high-recall/low-precision behavior
- Zero/few-shot re-ranking using full-size GPT-3
- Zero/few-shot re-ranking using Webmath-finetuned GPT-3
Scale?
- Model size?
- Batch size --- against current wisdom doesn’t seem to help too much
-----
# Q & A
-----
| [
"Stanislas, Polu",
"Jesse Michael, Han",
"Tao, Xu",
"Arvind, Neelakantan",
"Alec, Radford"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
ControlMath: Controllable Data Generation Promotes Math Generalist Models | Utilizing large language models (LLMs) for data augmentation has yielded encouraging results in mathematical reasoning. However, these approaches face constraints in problem diversity, potentially restricting them to in-domain/distribution data generation. To this end, we propose ControlMath, an iterative method involving an equation-generator module and two LLM-based agents. The module creates diverse equations, which the Problem-Crafter agent then transforms into math word problems. The Reverse-Agent filters and selects high-quality data, adhering to the "less is more" principle, achieving better results with fewer data points. This approach enables the generation of diverse math problems, not limited to specific domains or distributions. As a result, we collect ControlMathQA, which involves 190k math word problems. Extensive results prove that combining our dataset with in-domain datasets like GSM8K can help improve the model's mathematical ability to generalize, leading to improved performances both within and beyond specific domains. | ControlMath is proposed, an iterative method involving an equation-generator module and two LLM-based agents that enables the generation of diverse math problems, not limited to specific domains or distributions. | [
"Nuo, Chen",
"Ning, Wu",
"Jianhui, Chang",
"Jia, Li"
] | 2024-09-19T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.15376 | https://arxiv.org/abs/2409.15376 | https://www.semanticscholar.org/paper/e69191b77f00814c8f0579477e9dd188bae3eba5 |
|
CoqPilot, a plugin for LLM-based generation of proofs | We present CoqPilot, a VS Code extension designed to help automate the writing of Coq proofs. The plugin collects the parts of proofs marked with the admit tactic in a Coq file, i.e., proof holes, and combines LLMs along with non-machine-learning methods to generate proof candidates for the holes. Then, CoqPilot checks if each proof candidate solves the given subgoal and, if successful, replaces the hole with it. | null | ```
CoqPilot, a plugin for LLM-based generation of proofs
```
Andrei Kozyrev[13], Gleb Solovev[13], Nikita Khramov[13], and Anton Podkopaev[23]
```
[email protected]
```
1 JetBrains Research, Germany
2 JetBrains Research, the Netherlands
3 Constructor University Bremen, Germany
**Abstract**
We present CoqPilot, a VS Code extension designed to help automate the writing of
Coq proofs. The plugin collects the parts of proofs marked with the admit tactic in a
Coq file, i.e., proof holes, and combines LLMs along with non-machine-learning methods
to generate proof candidates for the holes. Then, CoqPilot checks if each proof candidate
solves the given subgoal and, if successful, replaces the hole with it.
The focus of CoqPilot is twofold. Firstly, we want to provide a zero-setup experience
for end-users. Secondly, we want to provide a platform for LLM-based experiments with
generation of Coq proofs.
[Code available at https://github.com/JetBrains-Research/coqpilot](https://github.com/JetBrains-Research/coqpilot)
Large Language Models (LLMs) have recently demonstrated their remarkable ability to
generate code in a variety of programming languages. A natural second frontier is to use LLMs
for generating code for proof assistants. There have already been works dedicated to using
LLMs for theorem proof generating [5, 7, 13, 16, 14, 4]. Noticeable research by OpenAI [13] has
shown how transformers could be used to successfully generate formal languages. Most of the
recent works focused on one step at a time generation, followed by a proof search [7]. However,
attempts to generate the complete proof using LLMs are also present [5]. Extensive research
was conducted on how to improve the model itself and achieve better generation metrics.
In this particular work, we focus on maxing out the generation capabilities, regardless of
the particular model. Main contributions include (i) studying possible external enhancements
to the process of generating Coq code with non-fine-tuned models and (ii) creating an applied
tool for convenient generation of Coq code using LLMs, as well as facilitating easy conduction
of experiments and research based on it.
In contrast to other works and tools, we aim to create a zero-setup experience for the user.
```
CoqPilot is our VS Code plugin [8], which needs just a particular API key, assigned in the
```
settings to start running. Projects like ASTactic [16], TacTok [4], and ProverBot9001 [14] learn
predictive models, yet lack an interaction interface for the end users. Proofster [2] provides
a web interface for Coq code generation, yet this interface is not integrated into the codewriting process. CoqPilot integrates directly into the currently popular IDE choice for Coq –
VSCode. Compared with Tactician [1], we provide a built-in opportunity to experiment with
many general-purpose LLMs. Users can easily configure many model parameters through plugin
settings and combine different approaches to boost performance. Moreover, approaches like
Tactician and CoqHammer [17], called via a special tactic, are easily integrated into CoqPilot.
A common setting in which CoqPilot works is as follows: an open Coq file with a number of
successfully proven theorems, and several goals, containing admit. In such a setting, CoqPilot
uses already proven theorems as a few-shot prompt for the LLM and tries to retrieve completion
for all the admitted goals independently.
Any admitted goal in Coq could be represented as a standalone theorem, using the hypotheses and the conclusion at the point of admit. We formulate the problem for the LLM as
-----
`CoqPilot, a plugin for LLM-based generation of proofs` A. Kozyrev, G. Solovev, N. Khramov, A. Podkopaev
follows: given the theorem’s statement, generate the proof for it. LLMs tend to perform better
on few-shot prompting[1] examples [9], compared to zero-shot. Theorems from the active file
are collected, and a few are used as prompts for the chosen LLMs. The selection of theorems
is based on certain metrics, such as their distance from the target theorem that needs to be
solved. We often cannot use the complete list of theorems, as the LLM’s context is limited.
The main strength of such formal methods like Coq is the ability to automatically perform
type-checking. In tandem with the generative capabilities of LLMs, it allows us to check the
correctness of the generated code. To allow automatic proof checking, we implemented a higher
level module, wrapping Coq Language Server[2] and providing useful abstractions over it such as
the one to check if proof is valid in a given environment. We used the particular Coq language
server implementation [3] and from now onwards will refer to it as Coq-LSP. For each target
theorem, we generate n potential proofs, check all of them, and in case of success, return it as
an answer. In case of failure, we launch a process aiming to fix the failing proof. This process
is similar to the one used in Copra [15]. Given the depth d and the number of completions
requested each time m, for each incorrect proof, we start a multi-round communication process
with an LLM. We send the compilation error along with the special prompt to the LLM and
ask them to fix it. If the proof is still not accepted by Coq afterward, we repeat the process,
but at a maximum of d − 1 times.
We also aimed to create a benchmarking environment to evaluate how well different LLMs
and other techniques can generate Coq code. Due to specifics of Coq, Coq-LSP is not Coq
version agnostic. Hence, neither are we. Moreover, Coq-LSP supports Coq versions starting
from 8.15/8.17. As a consequence, we could not use CoqGym [16] as a dataset provider, as it
contains projects requiring old Coq versions. Currently maintained Coq version by us is 8.19
as the latest one available.
We have conducted an experiment with a set of theorems from the IMM project [12]. IMM,
which supports the desired Coq 8.19, is of particular interest to our lab. LLMs we have picked
for evaluation include GPT models [10, 11], LLaMA [6], and Anthropic Claude 2.1. Moreover,
we tried firstorder reasoning tactic firstorder auto with * as a baseline. From each model,
we attempted to sample a correct proof for the theorem up to 20 times. In this experiment,
we did not try to do proof fixing. To pick the dataset, we took all proven theorems from the
IMM project. Then we have chosen only theorems shorter than 20 tactics (83% of the original
amount). We divided theorems by their human-written proof lengths into three groups: three
to four, five to eight, and nine to twenty lines long. Then, we randomly chose 45 theorems from
the dataset with group sizes proportional to the initial distribution.
Reference proof length ⩽ 4 5 – 8 9 – 20 Total
Group size 20 14 11 45
OpenAI GPT-3.5 35% 7% 18% 22%
OpenAI GPT-4 55% 7% 9% 28%
LLaMA-2 13B Chat 5% 0% 0% 2%
Anthropic Claude 2.1 25% 7% 0% 13%
Firstorder 25% 7% 0% 13%
All methods 60% 21% 18% 38%
Table 1: Benchmarking results
1During few-shot prompting, several concrete examples of how the task is to be solved are provided. Zeroshot prompting implies the system prompt is used without examples.
[2Language Server Protocol: https://microsoft.github.io/language-server-protocol/](https://microsoft.github.io/language-server-protocol/)
-----
`CoqPilot, a plugin for LLM-based generation of proofs` A. Kozyrev, G. Solovev, N. Khramov, A. Podkopaev
A notable result is that among each group, the collectible effort of all models is stronger than
any individual one. It shows that the approach of CoqPilot to using a sequence of different
models altogether is promising. The benchmarking tool and the report on experiments are
published in the repository.[3]
## References
[1] Lasse Blaauwbroek, Josef Urban, and Herman Geuvers. The Tactician: A seamless, interactive
[tactic learner and prover for coq, 2020. https://arxiv.org/abs/2008.00120.](https://arxiv.org/abs/2008.00120)
[2] Agrawal et al. Proofster: Automated formal verification, 2023.
[3] Emilio Jes´us Gallego Arias et al. Visual studio code extension and language server protocol for
coq, 2022.
[4] Emily First, Yuriy Brun, and Arjun Guha. TacTok: semantics-aware proof synthesis, 11 2020.
[5] Emily First, Markus N. Rabe, Talia Ringer, and Yuriy Brun. Baldur: Whole-proof generation and
[repair with large language models, 2023. https://arxiv.org/abs/2303.04910.](https://arxiv.org/abs/2303.04910)
[[6] Meta GenAI. LLaMA 2: Open foundation and fine-tuned chat models, 2023. https://arxiv.](https://arxiv.org/abs/2307.09288)
```
org/abs/2307.09288.
```
[7] Albert Q. Jiang, Wenda Li, Szymon Tworkowski, Konrad Czechowski, Tomasz Odrzyg´o´zd´z, Piotr
Mi lo´s, Yuhuai Wu, and Mateja Jamnik. Thor: Wielding hammers to integrate language models
[and automated theorem provers, 2022. https://arxiv.org/abs/2205.10893.](https://arxiv.org/abs/2205.10893)
[8] Andrei Kozyrev, Gleb Solovev, Nikita Khramov, and Anton Podkopaev. CoqPilot, a visual studio
code extension, designed to help automate writing of coq proofs, using large language models, 2023.
```
https://marketplace.visualstudio.com/items?itemName=JetBrains-Research.coqpilot.
```
[9] Huan Ma, Changqing Zhang, Yatao Bian, Lemao Liu, Zhirui Zhang, Peilin Zhao, Shu Zhang,
Huazhu Fu, Qinghua Hu, and Bingzhe Wu. Fairness-guided few-shot prompting for large language
[models, 2023. https://arxiv.org/abs/2303.13217.](https://arxiv.org/abs/2303.13217)
[[10] OpenAI. Language models are few-shot learners, 2020. https://arxiv.org/abs/2005.14165.](https://arxiv.org/abs/2005.14165)
[[11] OpenAI. GPT-4 technical report, 2024. https://arxiv.org/abs/2303.08774.](https://arxiv.org/abs/2303.08774)
[12] Anton Podkopaev, Ori Lahav, and Viktor Vafeiadis et al. Intermediate memory model and com[pilation correctness proofs for it, 2019. https://github.com/weakmemory/imm.](https://github.com/weakmemory/imm)
[13] Stanislas Polu and Ilya Sutskever. Generative language modeling for automated theorem proving,
[2020. https://arxiv.org/abs/2009.03393v1.](https://arxiv.org/abs/2009.03393v1)
[14] Alex Sanchez-Stern, Yousef Alhessi, Lawrence Saul, and Sorin Lerner. Generating correctness
[proofs with neural networks, 2020. https://arxiv.org/abs/1907.07794.](https://arxiv.org/abs/1907.07794)
[15] Amitayush Thakur, George Tsoukalas, Yeming Wen, Jimmy Xin, and Swarat Chaudhuri. An in[context learning agent for formal theorem-proving, 2024. https://arxiv.org/abs/2310.04353.](https://arxiv.org/abs/2310.04353)
[16] Kaiyu Yang and Jia Deng. Learning to prove theorems via interacting with proof assistants, 2019.
```
https://arxiv.org/abs/1905.09381.
```
[17] Czajka L and Kaliszyk C. Hammer for coq: Automation for dependent type theory, 2018.
[3https://github.com/JetBrains-Research/coqpilot/tree/main/etc/docs/benchmark](https://github.com/JetBrains-Research/coqpilot/tree/main/etc/docs/benchmark)
-----
| [
"Andrei, Kozyrev",
"Gleb, Solovev",
"Nikita, Khramov",
"Anton, Podkopaev"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Counterfactual PPO Enhanced Shared Reflector for LLM-based Multi-agent Collaboration | Benefiting from the powerful language expression and planning capabilities of Large Language Models (LLMs), LLM-based autonomous agents achieve promising performance in various downstream tasks. Recently, based on the development of single-agent systems, researchers propose to construct LLM-based multi-agent systems to tackle more complicated tasks. In this paper, we propose a novel framework, named COPPER, to enhance the collaboration ability of multi-agent systems through learnable self-reflection mechanism. To improve the quality of reflections, we propose to fine-tune a shared reflector, which automatically tunes the prompts of actor models using our counterfactual PPO mechanism. On the one hand, we propose counterfactual rewards to assess the contribution of a single agent’s reflection within the system, alleviating the credit assignment problem. On the other hand, we propose to train a shared reflector, which enables the reflector to personalize generated reflections according to agent roles, while reducing the computational resource requirements and improving training stability. We conduct experiments on three datasets to evaluate the performance of multi-agent systems in multi-hop question answering, mathematics, and chess scenarios. Experimental results show that COPPER possesses stronger reflection capabilities and exhibits excellent generalization performance across different actor models. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/93147 | null | null |
CreDes: Causal Reasoning Enhancement and Dual-End Searching for Solving Long-Range Reasoning Problems using LLMs | Large language models (LLMs) have demonstrated limitations in handling combinatorial optimization problems involving long-range reasoning, partially due to causal hallucinations and huge search space. As for causal hallucinations, i.e., the inconsistency between reasoning and corresponding state transition, this paper introduces the Causal Relationship Enhancement (CRE) mechanism combining cause-effect interventions and the Individual Treatment Effect (ITE) to guarantee the solid causal rightness between each step of reasoning and state transition. As for the long causal range and huge search space limiting the performances of existing models featuring single-direction search, a Dual-End Searching (DES) approach is proposed to seek solutions by simultaneously starting from both the initial and goal states on the causal probability tree. By integrating CRE and DES (CreDes), our model has realized simultaneous multi-step reasoning, circumventing the inefficiencies from cascading multiple one-step reasoning like the Chain-of-Thought (CoT). Experiments demonstrate that CreDes significantly outperforms existing State-Of-The-Art (SOTA) solutions in long-range reasoning tasks in terms of both accuracy and time efficiency. | By integrating CRE and DES (CreDes), the model has realized simultaneous multi-step reasoning, circumventing the inefficiencies from cascading multiple one-step reasoning like the Chain-of-Thought (CoT). | [
"Hao, Liu",
"Xiao, Zhang",
"Kangsheng, Wang",
"Tianyu, Hu",
"Songde, Han",
"Huimin, Ma"
] | 2024-10-02T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2410.01696 | https://arxiv.org/abs/2410.01696 | https://www.semanticscholar.org/paper/39bef6bc1f298eb0a9a46f7a9ce108eda27170e6 |
|
Creating a Dataset of Problems for Learning in Dependent Higher-Order Logic | N/A | null | # Creating a Dataset of Problems for Learning in Dependent Higher-Order Logic
Daniel Ranalter and Cezary Kaliszyk
Department of Computer Science, University of Innsbruck, Innsbruck, Austria
```
{d.ranalter,cezary.kaliszyk}@gmail.com
```
**Introduction** Although Automated Theorem Proving using Higher-Order Logic has produced several powerful systems, including well-known names like Vampire [6], E [11], and Zipperposition [1], incorporating dependent types into it poses a challenge. While there exist some
interactive systems utilizing dependent types [5, 4, 2], in order to keep their type-checking decidable, they differentiate between judgmental and provable equality. PVS [10] is an exception
to this but supports predicate subtyping and not dependent types in general.
Rothgang et al. introduced Dependently-Typed Higher-Order Logic [9] (DHOL) and developed a translation to HOL as a first step into making viable automated theorem provers for
dependently typed logics. Niederhauser et al. then extended the theorem prover Lash [7] as
the first instance of a native ATP system able to deal with problems of this kind. We plan to
further this work and hope to make DHOL an attractive base for systems in the future.
To develop modern AI-guided provers for DHOL, we are creating a problem set sizable
enough to support meaningful benchmarks and learning data. This will allow us to check
implementations and compare them against others, providing fertile ground on which DHOL
can grow.
**DHOL** DHOL is an extension of the Simple Type Theory (STT) as introduced by Church [3].
More precisely, we use a classical formulation of STT that includes a base type for booleans,
primitive logical connectives for implication ⇒, and typed (term-)equality =A as well as the
constant function False ⊥. Rothgang et al. created DHOL [9] by changing the simple function
constructor A → _B to the dependent version Πx : A.B and allowing for the declaration and_
instantiation of dependent base types. The extension results in undecidable type-checking, as
now the type equality judgment Γ ⊢ _as1...sn ≡_ _at1...tn requires proofs for the equivalence_
(and well-typedness) of all si and ti. The classical nature and coinciding equalities lead to a
logic much closer to those of established ATPs. Type-checking needs only happen once, as all
inference rules preserve the well-typedness of the problem. Indeed, Rothgang et al. [9] show
for their translation that, if the original DHOL problem is well-typed, a valid proof of the
translated problem can be tied to a valid proof in DHOL.
**Extension of TPTP** We use the TPTP syntax for our problems. In particular, TPTP’s
support of quantifying over higher-order typed variables by using the !>[X:T]: in the TH1
form will be reused as the binder for quantifying over dependent types by allowing which forms
```
T might take. Part of this work is the specification of a proper TPTP form for DHOL, which
```
is syntactically straightforward but requires consideration for things like type-checking and
semantics. For example, Niederhauser et al. define two new SZS ontology statuses [12] related
to type-checking in [7]. A DHOL problem is therefore in fact a (dependent) conjunction of two
problems, as the proof of the conjecture itself is not meaningful without proof of the problem’s
well-typedness.
-----
Dataset Creation for DHOL Ranalter, Kaliszyk
```
Lemma L_R_neq n m (p : t n) (q : t m) : L m p <> R n q.
```
=
```
thf(l_r_neq, conjecture, ![M:nat, N:nat]:
(![P:(fin @ N), Q:(fin @ M)]: ((l @ N @ M @ P) = (r @ M @ N @ Q)))).b
```
Figure 1: The original Rocq problem next to the TPTP conjecture.
**Benchmark Creation** The creation of the problem set is ongoing. Currently, the set includes
problems related to fixed-length lists/vectors, fixed-size matrices, and fixed-size sets. Especially
problems of the last category were translated manually from dependently typed theories in the
standard library of the Rocq prover [4]. In addition to the already mentioned categories of
problems, there are also several examples for sanity-checking type-inference, type-constraint
generation, and/or direct type-checking of a problem. These are artificial problems that do
not state any mathematical concepts. For example, a problem might establish instantiations of
generic dependent types as well as some corresponding terms to finally conjecture the equality
of those terms. While those equalities are meaningless and indeed usually false this constitutes
a minimal example of creating typing constraints between two sides of an equality.
The translation of problems between different proof systems is an interesting AI task and is of
major value for the systems themselves as dependently-typed reasoning can then be shared and
optimized independently of the systems. Our initial manual experiments can serve as a training
ground for future automated experiments. Instances of such problems were important enough
to find their way into standard libraries or were previously under active investigation. This
elevates a benchmark set from a collection of toy examples to a more meaningful representation
of what a system has to be able to achieve. Another advantage is the variety of problems that
come with it. The same concept can be expressed in different ways by different systems, of
which some differences might translate over into our representation. It should also yield more
variety in difficulty and possibly even in domains compared to handcrafting them. As such, a
way to automatically translate conjectures and axioms is desirable, increasingly so when the
number of potential problems of one source is large. However, modern proof assistants usually
provide several convenience features, such as implicit arguments and types that are inferred
from the “real” arguments. TPTP takes a more verbose approach and lists all terms and types
for a function explicitly. Figure 1 illustrates that translation from one to the other is not trivial
because some of the types are missing. Discrepancies such as these make a straight-forward
translation impossible and a sophisticated program will have to be developed to rise to the task.
While core functionality, such as the mentioned inference of implicit arguments, can be shared,
it will be necessary to have a separate parser for all considered systems. Currently, there are
plans to look further into the Rocq theorem prover [4] and expand to the Mizar Mathematical
Library[1], Agda [2], and Lean [5].
**Future Work** The creation of this benchmark will facilitate future developments in DHOL,
its automation, and proof guidance for LashDHOL [7]. Lash already includes a strategy language,
and using strategy invention for DHOL will only be possible with a sufficiently large and diverse
set of problems. Similarly, several of the techniques developed in the system have alternatives
(for example, there are two ways how choice can be specified and implemented [8]), and choosing
the right approaches will require experimenting with them. With this, it would also be possible
to develop strategies that implement internal guidance based on machine learning. One of the
1http://mizar.org/library/
-----
Dataset Creation for DHOL Ranalter, Kaliszyk
major weaknesses of the tableaux architectures is enumerative approach to universal quantifies
and again a dataset would allow optimizing such term enumeration schemes.
## References
[1] Alexander Bentkamp, Jasmin Blanchette, Sophie Tourret, and Petar Vukmirovic. Superposition
for full higher-order logic. In André Platzer and Geoff Sutcliffe, editors, Proc. 28th International
_Conference on Automated Deduction, volume 12699 of LNCS, pages 396–412. Springer, 2021._
[2] Ana Bove, Peter Dybjer, and Ulf Norell. A brief overview of Agda – a functional language with
dependent types. In Stefan Berghofer, Tobias Nipkow, Christian Urban, and Makarius Wenzel,
editors, Proc. 22nd International Conference on Theorem Proving in Higher Order Logics, volume
5674 of LNCS, pages 73–78, 2009.
[3] Alonzo Church. A formulation of the simple theory of types. Journal of Symbolic Logic, 5(2):56–68,
1940.
[4] Coq Development Team. The Coq Reference Manual, Release 8.18.0, 2023.
[5] Leonardo de Moura and Sebastian Ullrich. The Lean 4 theorem prover and programming language.
In André Platzer and Geoff Sutcliffe, editors, Proc. 28th International Conference on Automated
_Deduction, volume 12699 of LNAI, pages 625–635, 2021._
[6] Laura Kovács and Andrei Voronkov. First-order theorem proving and vampire. In Natasha Sharygina and Helmut Veith, editors, Computer Aided Verification - 25th International Conference,
_CAV 2013, Saint Petersburg, Russia, July 13-19, 2013. Proceedings, volume 8044 of Lecture Notes_
_in Computer Science, pages 1–35. Springer, 2013._
[7] Johannes Niederhauser, Chad E. Brown, and Cezary Kaliszyk. Tableaux for automated rea[soning in dependently-typed higher-order logic. Submitted, http://cl-informatik.uibk.ac.at/cek/](http://cl-informatik.uibk.ac.at/cek/submitted/jncbck.pdf)
[submitted/jncbck.pdf.](http://cl-informatik.uibk.ac.at/cek/submitted/jncbck.pdf)
[8] Daniel Ranalter, Chad E. Brown, and Cezary Kaliszyk. Experiments with choice in dependently[typed higher-order logic. Submitted, http://cl-informatik.uibk.ac.at/cek/submitted/drcbck.pdf.](http://cl-informatik.uibk.ac.at/cek/submitted/drcbck.pdf)
[9] Colin Rothgang, Florian Rabe, and Christoph Benzmüller. Theorem proving in dependentlytyped higher-order logic. In Brigitte Pientka and Cesare Tinelli, editors, Proc. 29th International
_Conference on Automated Deduction, volume 14132 of LNAI, pages 438–455, 2023._
[10] John Rushby, Sam Owre, and Natarajan Shankar. Subtypes for specifications: Predicate subtyping
in PVS. IEEE Transactions on Software Engineering, 24(9):709–720, 1998.
[11] Stephan Schulz, Simon Cruanes, and Petar Vukmirovic. Faster, higher, stronger: E 2.3. In Pascal
Fontaine, editor, Automated Deduction - CADE 27 - 27th International Conference on Automated
_Deduction, Natal, Brazil, August 27-30, 2019, Proceedings, volume 11716 of Lecture Notes in_
_Computer Science, pages 495–507. Springer, 2019._
[12] Geoff Sutcliffe. The SZS ontologies for automated reasoning software. In Piotr Rudnicki, Geoff
Sutcliffe, Boris Konev, Renate A. Schmidt, and Stephan Schulz, editors, Proc. 15th International
_Conference on Logic for Programming, Artificial Intelligence and Reasoning, volume 5330 of LNCS,_
2008.
-----
| [
"Cezary, Kaliszyk",
"Daniel, Ranalter"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
DOP: Diagnostic-Oriented Prompting for Large Language Models in Mathematical Correction | Math world problems correction(MWPC) is a novel task dedicated to rectifying reasoning errors in the process of solving mathematical problems. In this paper, leveraging the advancements in large language models (LLMs), we address two key objectives:(1) Distinguishing between mathematical reasoning and error correction; (2) Exploring strategies to enhance the error correction capabilities of LLMs in mathematics to solve MWPC task. We noticed that, in real-time education,assisting students in recognizing their mistakes is more crucial than simply providing correct answers. However, current research tends to prioritize obtaining accurate solutions to math problems rather than correcting potentially incorrect ones. Therefore, we modify the research paradigm, demonstrating that improving mathematical reasoning abilities does not equate to mastery in error correction. Meanwhile, we propose a novel method called diagnostic-oriented promping(DOP) aimed at facilitating LLMs to excel in error correction. In experiments, DOP has shown outstanding performance, highlighting its significant impact. We argue that in mathematical education, the demand for outstanding correctors surpasses that for proficient reasoners. Codes and data are available on https://github.com/ChenhaoEcnuCS/Reason-Correct. | null | ## DOP: Diagnostic-Oriented Prompting for Large Language Models in Mathematical Correction
**Hao Chen,Biaojie Zeng, Xin Lin, Liang He and Aimin Zhou**
East China Normal University
{10205102429,51265901131}@stu.ecnu.edu.cn
{xlin,lhe,amzhou}@cs.ecnu.edu.cn
**Abstract**
**Math world problems correction(MWPC) is**
a novel task dedicated to rectifying reasoning
errors in the process of solving mathematical
problems. In this paper, leveraging the advancements in large language models (LLMs), we
address two key objectives:(1) Distinguishing
between mathematical reasoning and error
correction; (2) Exploring strategies to enhance
the error correction capabilities of LLMs in
mathematics to solve MWPC task. We noticed
that, in real-time education,assisting students in
recognizing their mistakes is more crucial than
simply providing correct answers. However,
current research tends to prioritize obtaining
accurate solutions to math problems rather
than correcting potentially incorrect ones.
Therefore, we modify the research paradigm,
demonstrating that improving mathematical
reasoning abilities does not equate to mastery
in error correction. Meanwhile, we propose
a novel method called diagnostic-oriented
**promping(DOP) aimed at facilitating LLMs to**
excel in error correction. In experiments, DOP
has shown outstanding performance, highlighting its significant impact. We argue that in
mathematical education, the demand for outstanding correctors surpasses that for proficient
reasoners. Codes and data are available on
[https://github.com/ChenhaoEcnuCS/Reason-](https://github.com/ChenhaoEcnuCS/Reason-Correct)
[Correct.](https://github.com/ChenhaoEcnuCS/Reason-Correct)
others, have sparked innovative approaches across
diverse domains of study.
In mathematics domain, numerous studies(Wei
et al., 2022; Kojima et al., 2022; Wang et al.,
2023a,c; Zhang et al., 2023; An et al., 2023; Liu
et al., 2023b; Liu and Low, 2023; Yu et al., 2023;
Luo et al., 2023) have focused on the task of solving math world problems(MWPs). Some have employed diverse prompting strategies (Wei et al.,
2022; Kojima et al., 2022; Wang et al., 2023a,c;
Zhang et al., 2023) to enhance the reasoning capabilities of LLMs, while others (An et al., 2023;
Liu et al., 2023b; Liu and Low, 2023; Yu et al.,
2023; Luo et al., 2023) have fine-tuned models for
mathematical tasks using domain-specific corpora.
However, we observe that most of these approaches primarily focus on achieving accuracy
in solving MWPs. We often overlook the key point:
merely enhancing the ability of a large language
model to solve MWPs correctly falls short in mathematics pedagogy scenarios.
In real life, good students may be good at solving MWPs, but struggle to mentor their peers. Conversely, parents who may encounter difficulties in
solving MWPs themselves can effectively coach
their children using educational resources. This
observation underscores the importance of focus**ing not just on a model’s ability to solve prob-**
**lems, but also on its capacity to correct errors**
**and provide guidance. With LLMs, an significant**
objective is instructing them to assist students in
identifying and correcting their mistakes.
We first distinguish the concept of reasoning and
correcting. As shown in Figure 1, in educational
scenarios, the capacity for reasoning aids students
in providing correct answers, whereas error correction empowers teachers to guide students through
the process of identifying and rectifying mistakes
in their responses. Our research mainly discussed
those abilities in mathematics domain.
Therefore, we begin with a research question: is
**Introduction**
**“Give a man a fish and you feed him**
**for a day; Teach a man to fish and you**
**feed him for a lifetime.”**
—-Huainanzi
In recent years, the rapid advancement of large
language models(LLMs)(Zhao et al., 2023) has
profoundly reshaped the landscape of artificial intelligence research. The remarkable capabilities
exhibited by prominent models like GPT-4 (OpenAI, 2023), LLama2 (Touvron et al., 2023), among
-----
in the solution. Conversely, it may inaccurately
**Reasoning** **Answer Set**
**Question** The time: 200/60= 3 seconds.
A train is 200 metres long and travelling at I think the time that the train take
60 metres per second, How long did it was:220/60= 4 seconds.
take it to cross a 220 metre long bridge?
Total length: 200+220 = 420 metres.
So the time: 420/60= 7 seconds.
**Correcting** **Correction**
A train is 200 metres long and travelling at In your solution, the equation 200/60 = 3 is wrong. You
60 metres per second, How long did it may have some misunderstanding about the total length of
take it to cross a 220 metre long bridge? the train travelled in this question. In this question, the total
**length travelled by the train should be the length of the**
**train plus the length of the bridge, not just the length of**
The time: 200/60= 3 seconds. **the bridge.**
Figure 1: Examples of reasoning and correcting.
**the ability of a language model to reason and to** - We propose Diagnostic-Oriented Prompt**correct errors equivalent?** **ing(DOP), a novel and effective method to**
In some cases, an LLM may correctly solve a enhance LLMs’ correcting abilities based on
mathematical problem but fail to address errors modern teaching resources.
answer a math question but successfully rectify
solution errors based on adequate contextual cues.
Based on the observation, we hypothesise that
the reasoning and correcting capabilities are not
fully equivalent. To demonstrate this, we introduce math world problems correction(MWPC),
a novel task focusing on the correction abilities of
LLMs. We also conduct a series of experiments on
MWPC task to prove our hypothesis, which will be
described in Section 3.
Then, we further raise a question: How can we
**enhance the correcting abilities of LLMs?**
In modern teaching materials, both concise and
detailed answers are commonly provided alongside
the questions. Since we have demonstrated that the
reasoning and correcting abilities were not fully
equivalent, we proposed a novel method, called
**Diagnostic-Oriented Prompting(DOP), leverag-**
ing available resources to enhance LLMs’ proficiency as correctors in mathematical education.
Generally speaking, our contributions can be
concluded as follows.
- We modify the research paradigm, showing
that in most LLMs, the abilities to reason and
correct in MWPs are not fully equivalent, emphasizing that merely enhancing reasoning is
insufficient.
- To the best of our knowledge, we are the first
to propose MWPC task, which is more relevant and beneficial in mathematical education
settings.
**2** **Background and Related Work**
**2.1** **Mathematical Reasoning Through LLMs**
There are many ways to improve the performance
of LLMs on mathematical reasoning tasks by
prompting them.
The method of chain-of-thought(COT) prompting (Wei et al., 2022; Kojima et al., 2022) can be
used in mathematical domain and improves the accuracy. (Wang et al., 2023c) notices that a complex
reasoning problem is usually thought of in a number of different ways and used majority voting to
improve the process of COT. (Zhou et al., 2023;
Wang et al., 2023a) endeavour to decompose complex problems into multiple simple steps, guiding
the large language model to solve mathematical
problems step by step. (Liu et al., 2023a; Imani
et al., 2023; Gou et al., 2023) mainly focus on using
external tools like Python executor, mathematical
calculator, and so on, to reduce the probability of
error in LLMs and improve the reliability of LLMs
in mathematical reasoning tasks.
In order to specifically enhance and utilise the
mathematical reasoning ability of the model, some
researchers use fine-tuning or instruction-tuning
methods. (Ho et al., 2023) proposed fine-tuned
COT, which generates reasoning samples from
large teacher model to fine-tune smaller model.
(An et al., 2023) utilised a corpus of mathematical
reasoning containing error samples and the error
correction process to fine-tune small models like
LLama-2(Touvron et al., 2023) and MetaMath(Yu
et al., 2023). (Liu and Low, 2023) introduced Goat,
-----
**Output**
+
**Student’s Solutions**
**Math Question**
**Math Question**
**MWPS**
**Solutions Content**
**Correction Content**
**MWPC**
**DOP**
**Better Correction Content**
**Language Model**
**②**
**Math Question**
**Knowledge Point**
**Prompt&Output**
**Standard Solution**
**E-learning Information**
**...**
**Student’s solutions**
**Output**
Figure 2: The overall framework of our research. In the first stage, we conduct both MWPS and MWPC tasks on
our candidate models and prove that mathematical reasoning and correcting capabilities are not fully equivalent.
Then, in the second stage, we conduct our strategy called Diagnostic-Oriented Prompting(DOP), enabling our
candidate models to enhance their correcting abilities in mathematical domain.
which is a fine-tuned LLama model and can significantly outperforms GPT-4 (OpenAI, 2023) on a
wide range of arithmetic tasks.
**2.2** **Corrrection Throught LLMs**
Meanwhile, some research spotlights the correction
capabilities of LLMs.
(Madaan et al., 2023) proposed self-refine,
which is a novel approach that allows LLMs to
iteratively provide feedback and refine their own
outputs. (Pan et al., 2023) summarised a series of
methods using feedback either produced by LLMs
themselves or some external systems, to rectify
those flaws. Self-correction effectively mitigates
hallucination (Ji et al., 2023) in LLMs. However,
(Huang et al., 2023) pointed out that without external feedback, LLMs still connot self-correct their
own reasoning process, including mathematical reasoning process. According to (Stechly et al., 2023;
Valmeekam et al., 2023a,b; Huang et al., 2023),
when correcting something wrong, especially those
errors produced by LLMs themselves, external information is indispensable.
There are also some studies centering on error
correction task. (Wang et al., 2023b) used LLMs
to remediate students’ mathematical mistakes step
by step. (Tang et al., 2023; Song et al., 2023; Du
et al., 2023; Kwon et al., 2023) focused on grammatical error correction(GEC) task, utilising LLMs
to solve GEC problems in monolingual and multi
lingual scenarios. (MacNeil et al., 2023; Leinonen
et al., 2023) researched the abilities of LLMs to
correct errors in code, which is beneficial to computer science(CS) education. Unfortunately, there
is still very little research on error correction to the
mathematical reasoning process.
**2.3** **AI For Mathematical Education**
Artificial Intelligence(AI) strongly promotes the
development of mathematical education.
Since LLMs were put into use, (Wang et al.,
2023b) simulated the process of human tutor, determining different strategy to address students’
reasoning mistakes in mathematics. (Wu et al.,
2023) studied mathematical education on conic
sections in Chinese senior high school education
using LLMs like GPT-4(OpenAI, 2023) and ChatGLM(Du et al., 2022). (Long et al., 2023) evaluated ChatGPT on generating pre-university math
questions, providing insights for teachers and researchers in utilizing LLMs in mathematical education. The research above reveals that making
LLMs to be good teachers is a following trend for
AI in mathematical education.
**3** **Methodology**
In this section, we will address the focus and describe the research methodology we used in this
study.
Firstly, we conducted experiments to validate
-----
**differences between reasoning and correcting**
in mathematics domain. Continue with the process, we proposed Diagnostic-Oriented Prompt**ing(DOP) for correction capabilities. Figure 2**
shows the overall framework.
**3.1** **Validating Differences between Reasoning**
**and Correcting**
Initially, we conducted comparative experiments
to validate the observation that the reasoning and
error correction abilities of LLMs are not fully correlated.
We established several pivotal elements within
this scenario. Firstly, our candidate models are
represented as an expression f **(·), with the output**
sequence denoted as y. In mathematical reasoning
task, the input is a math question, denoted as Q.
We provided the model with a prompt containing
the question, denoted as Pr(Q), and obtained an
output yr, which means that:
**_yr = f_** **(Pr(Q))** (1)
Similarly, in the MWPC task, the input consists
of a math question Q and its corresponding incorrect solution W . We provided the model with
a prompt containing both elements, denoted as
**_Pc(Q, W ), and obtained an output yc, indicating_**
that:
**_yc = f_** **(Pc(Q, W ))** (2)
In the next step, considering the question Q, we
examine standard answer, represented as A. To
ascertain the model’s ability to solve the question,
we employ an extraction function, denoted as Nr
in the reasoning task and Nc in the correction task,
to extract the final numeric answer from the natural
language. Ultimately, we defined two states, Sr
and Sc, to indicate whether the model had successfully solved the task, which means that:
LLMs as our candidate models to perform both
reasoning and correction tasks. Details about these
selected models will be provided in Section 4.
**3.2** **Diagnostic-Oriented Prompting(DOP)**
In our previous experiments, we observed that
while LLMs may not entirely solve problems, they
can generate correction processes. This parallels
real-time education scenarios where teachers or parents, though unable to solve problems themselves,
can guide children based on relevant information.
Motivated by this, we propose a strategy named
**Diagnostic-Oriented Prompting (DOP) to lever-**
age abundant resources and enhance the mathematical correction abilities of LLMs.
In modern educational materials, questions often
come paired with answers, ranging from concise
to detailed responses. Depending on the available
resources, we can employ varying levels of DOP
to enhance the correction abilities of LLMs.
Furthermore, we conducted experiments involving 3 levels of DOP, affirming the effectiveness of
the DOP approach.
The DOP framework comprises three levels,
each with distinct input configurations. In the first
level, the model’s input consists of the mathematical problem, the erroneous solution and the correct
numeric answer(NA) of the problem. In the second
level, the model’s input consists of the problem, the
erroneous solution and the brief explanation(BE)
of the problem. And finally, in the third level, the
model’s input consists of the problem, the erroneous solution and the standard answer(SA) of the
problem. The prompt method that does not provide any additional supplementary information is
labeled as standard prompting(SP).
The goal of DOP is to correct erroneous solution processes and arrive at the correct answer.
The 3 levels of DOP progressively deepen and are
denoted DOP+NA, DOP+BE and DOP+SA. The
complete SP and DOP process is illustrated in Figure 3.
**4** **Expriments and Analysis**
**4.1** **Experiment Setup**
We utilized some LLMs as candidate models, and
collected multiple {Q, A, W } triplets from several mathematical datasets.
**Candidate models. We selected the following**
LLMs as out candidates, which contains some no
1, if Nr(yr) = Nr(A), (3)
0, otherwise.
1, if Nc(yc) = Nc(A), (4)
0, otherwise.
**_Sr =_**
**_Sc =_**
As mentioned above, it is necessary to collect
a wide range of {Q, A, W } triplet. They are all
represented as natural language.We chose several
-----
ics models, and some educational-purpose models.
**MWP:**
**Numeric Answer (NA):** **Wrong Answer(WA)**
**Brief Explanation(BE):**
**Standard Answer(SA):**
**MWP** **WA**
**NA.** **BE.** **SA.**
**Method** **SP** **DOP+NA** **DOP+BE** **DOP+SA**
Figure 3: An example of different levels of DOP.
table general models, some specialized mathemat- learning from evol-instruct feedback for math
- GPT-4-0613(OpenAI, 2023). GPT-4 is one
of the most widely known LLMs, developed
by openai. We selected the latest version.
- GPT-3.5-turbo(OpenAI, 2023). A strong
and remarkable model. It is also known as
ChatGPT, developed by openai.
- LLama-2-Chat(Touvron **et** **al.,** **2023).**
LLama-2 is a collection of LLMs devloped
by Meta and LLama-2-Chat is the fine-tuned
model for dialogue use. We selected 3
parameter size: 7B, 13B and 70B.
- MetaMath(Yu et al., 2023). MetaMath is
a fine-tuned model that specializes in mathematical reasoning. Researchers used a rewrite
strategy to bootstrap math questions and then
fine tune the model. We selected 2 parameter
size: 7B, 13B, pretrained from LLama2, and
a 7B version pretrained on Mistral(Jiang et al.,
2023).
- WizardMath(Luo et al., 2023). WizardMath
is a fine-tuned model using reinforcement
learning from evol-instruct feedback for mathematical reasoning. We selected 2 parameter
size: 7B, 13B.
- Baichuan2(Yang et al., 2023). Baichuan2 is
a series of multilingual LLMs trained from
scratch and perform well on some vertical
domains including education. We selected 2
parameter size:7B, 13B.
**Data Construction. In our experiments, we**
collected sets of Q, A, W triplets, focusing on application problems in primary school mathematics
described in natural language. The datasets we
primarily referred to are as follows:
- GSM8k(Cobbe et al., 2021). GSM8k is a
dataset of 8.5K high quality diverse grade
school math word problems containing natural language solutions.
- MathDial(Macina et al., 2023). MathDial is
a dataset of one-to-one teacher-student tutoring dialogues grounded in multi-step mathematical reasoning problems. Most of the math
problems are from GSM8k.
As MathDial provides problem statements, correct answers, and student confusion, we leveraged
-----
|Model|R-rate C-rate|sR+sC sR+uC uR+sC uR+uC|
|---|---|---|
|GPT-4-0613 GPT-3.5-turbo LLama-2-chat-7b LLama-2-chat-13b LLama-2-chat-70b MetaMath-7b MetaMath-13b MetaMath-Mistral-7b WizardMath-7b WizardMath-13b Baichuan-2-7b Baichuan-2-13b|0.859 0.811 0.556 0.344 0.108 0.089 0.200 0.153 0.318 0.224 0.764 0.180 0.772 0.238 0.733 0.254 0.708 0.391 0.486 0.165 0.079 0.059 0.281 0.105|2152 306 165 238 659 932 325 945 45 264 211 2341 148 424 290 1999 282 629 358 1592 455 1732 61 613 606 1602 76 577 637 1459 91 674 890 1138 229 604 294 1096 177 1294 29 196 139 2497 133 690 186 1872|
Table 1: The performance of candidate models in comparative experiments. The maximum value in each column is
highlighted in bold.
Table 1 shows the performance of candidate
models in comparative experiments. Let’s start by
analyzing the R-rate and C-rate. We can observe
that GPT-4 achieves the highest performance both
Er and Ec in Different Models
0.930.88 0.88 0.89 0.88 EErc
0.8 0.8
0.67
0.62
0.6
Values 0.41 0.44 0.44 0.42
0.4
0.34
0.31 0.3
0.26 0.27
0.21 0.21
0.2 0.180.15 0.170.13 0.16
0.0
GPT-4 GPT-3.5 L-7b L-13b L-70b M-7b M-13b M-M-7b W-7b W-13b B-7b B-13b
Candidate Models
on MWPS and MWPC tasks. This indicates that
as the most advanced general-purpose language
model currently available, GPT-4’s mathematical
capabilities are clearly evident. Meanwhile, the
specialized mathematics models like MetaMath
show strong capabilities in mathematical reasoning, while their error correction abilities still have
considerable room for improvement.
Next, we will analyze the performance of the
models on the following four metrics: sR+sC,
sR+uC, uR+sC and uR+uC. In Table 1, we can
observe that, even in GPT-4, successfully solving
a mathematical problem does not guarantee the
ability to accurately correct an incorrect solution.
Conversely, the model may not always provide an
accurate solution, yet it can generate a proper correction process for an incorrect solution.
We further provides 2 definitions as follows.
**_sR + sC_**
**_Er =_** **_|_** **_|_** (5)
**_|(sR + sC) ∪_** **(uR + sC)|**
**_sR + sC_**
**_Ec =_** **_|_** **_|_** (6)
**_|(sR + sC) ∪_** **(sR + uC)|**
Figure 4: Results of Er and Ec. We represents the
candidate models using the first letters. For example,
’M-M-7b’ means MetaMath-Mistral-7b.
this data to construct a dataset comprising 2,861
sets of {Q, A, W } triplets.
**4.2** **Results and Analysis**
**4.2.1** **Comparative Experiments for**
**Validation.**
For each candidate model in the comparative experiments, we recorded the following information:
- R-rate. The rate of {Q, A, W _} triplets_
which were reasoned successfully.
- C-rate. The rate of {Q, A, W _} triplets_
which were corrected successfully.
- sR+sC. The number of {Q, A, W _} triplets_
which were reasoned and corrected successfully.
- sR+uC. The number of {Q, A, W _} triplets_
which were reasoned successfully but corrected unsuccessfully.
-----
|Question|Francine drives 140km to work each day. If she does not go to work 3 days every week, find the total distance she drives to work for 4 weeks in kilometers.|
|---|---|
|Ground-truth|There are 7 days in a week, so if he doesn’t go to work for 3 days, he goes 7-3 = 4 days every week. He travels 140km each day for a weekly total of 140*4 = 560km. In 4 weeks he will travel a total of 560 * 4 = 2240km.|
|Answer from GPT-4|Francine goes to work 7 - 3 = 4 days a week. This means that she drives a total of 140 * 4 =560km every week. Therefore in 4 weeks, she drives a total of 560 * 4 = 2240km.|
|Student’s solution|In a week, Francine drives 140km x 5 = 700km to work. In 4 weeks, she drives 700km x 4 = 2800km to work. But she does not go to work for 3 days every week, so in 4 weeks, she does not go to work for 3 x 4 = 12 days. Therefore, the total distance she drives to work in 4 weeks is (700km x 4) - (140km x 12) = 2800km - 1680km = 1120km.|
|Correction From GPT-4|The student’s mistake lies in the fact that they incorrectly subtracted the days she does not work from the total distance she drives to work. In reality, the days Francine does not work have already been accounted for when calculating the weekly distance. Here is the correct solution: In a week Francine drives 140km x 5 days = 700km to work. In 4 weeks, she drives 700km x 4 = 2800km to work. There is no need to subtract the days she does not work because they have already been accounted for in the initial weekly calculation. Answer: 2800km.|
Table 2: A case study of GPT-4. The error of this students lies in that he or she thinks there are five days in a week.
GPT-4 effectively solved this problem. However, GPT-4 didn’t capture the student’s error.
As we mentioned above, Er represents the ratio
of the corrected numbers to the total reasoned numbers, while Ec represents the ratio of the reasoned
numbers to the total corrected numbers. We displays the value of Er and Ec in our experiments
in Figure 4.
In Figure 4, we can observe that all our candidate
models achieve higher Er than Ec. This suggests
that if a model can successfully correct an error, it
is more likely to solve the problem simultaneously.
However, when the model is capable of solving
a problem, the probability of correcting a related
incorrect solution is much lower.
We also provide a case study in our experiment,
as shown in Table 2. The mathematical problem
requires finding the distance Francine has traveled
during her 4-week work. GPT-4 effectively solved
this problem. However, when faced with a student who miscalculated the number of working
days, GPT-4 did not successfully correct its mistake. This indicates that for LLMs, successfully
solving a mathematical problem does not necessarily mean they can successfully correct any errors
that may arise within it. Similarly, successfully
correcting an error within a mathematical problem
does not imply that they can also successfully solve
the problem.
To conclude, combining the result from Figure 4
and Table 2, we successfully demonstrate through
comparative experiments that the ability of LLMs
**in mathematical reasoning is not entirely equiva-**
**lent to their ability in mathematical error correc-**
**tion. Therefore, solely enhancing a model’s math-**
ematical problem-solving ability does not guarantee its proficiency as an error corrector. Further
research is needed to thoroughly investigate the
model’s error correction capabilities in mathematics.
**4.2.2** **Diagnostic-Oriented Prompting(DOP)**
For the DOP framework mentioned in Figure 2 and
Figure 3, we conducted experiments involving 3
levels of DOP with several candidate models.
We studied DOP in 8 candidate models, comparing the correction passing rate between SP and
DOP. We record the experimental results in Figure
5.
We found that when employing DOP, all candidate models achieved higher pass rates compared
to using SP alone during the MPWC task. This suggests that the DOP method significantly enhances
-----
Results of DOP Experiments
GPT-3.5 L-7b L-13b L-70b M-7b M-13b W-7b W-13b
0.675 SP
0.646 DOP+NA
DOP+BE
0.593 DOP+SA
0.553
0.53 0.532
0.51
0.489
0.465
0.448
0.42
0.391 0.386
0.344 0.343 0.3440.363 0.3430.358
0.317
0.292
0.274
0.214 0.256 0.224 0.238 0.252
0.18
0.152 0.153 0.165
0.089
Candidate Models
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0.0
Figure 5: Experiment results of DOP. We recorded the success rates of error correction under different scenarios
and visualized them as bar charts.
the mathematical error-correction capabilities of
LLMs.
**5** **Conclusions**
math world problems.
**6** **Limitations and Future Work**
We have several limitations in this work.Firstly,
there are still lack of high-quality mathematical
correction datasets to study the relative abilities of
LLMs. Meanwhile, we study correction mainly
based on all kinds of language models. In fact,
the behaviour of LLMs and human teachers and
students differs a lot. We still need deeper research
in the field. To study this issue well, our future
work is as follows:
- Collect high-quality MWPs and corre**sponding mistakes. High-quality data is vital**
for us to enhance the performance of LLMs.
Most mainstream datasets in mathematical domain are lack of some relevant solutions with
errors, which is not helpful to study the correcting abilities of LLMs. As a result, we are
committed to construct a high-quality dataset
containg MWPs and corresponding mistakes.
- We need a deeper view of real-time mathe**matical education scenarios. The behaviours**
between human and language models differs a
lot. We also need some data from the real life,
not just merely from the language models. In
the future, it is necessary for us to go deeper
to the real-time education scenarios.
- Develop more level of DOP. We have broken
the mold and proven the effectiveness of DOP.
It is still necessary to develop a higher level
of DOP method.
In this paper, we have come to the following conclusions.
**1.LLMs’ reasoning and correcting abilities**
**are not fully equivalent. In our comparative ex-**
periment, LLMs may solve a problem but fail to
correct a wrong solution of this problem. Also,
they may not solve a problem properly, but can
find reasoning errors and correct them in a wrong
solution.
**2.Mainstream LLMs’ have stronger reason-**
**ing abilities than correcting abilities. In our ex-**
periments, our candidate models perform better in
reasoning task than correcting tasks. This suggests
that while LLMs excel as reasoners, their ability to
correct errors is limited. Therefore, further research
into their correction abilities is necessary.
**3.Improving LLMs’ correcting abilities is vi-**
**tal and essential. In mathematical education sce-**
narios, it is more vital to correct the error from the
students, rather than merely providing solutions.
Since we have demonstrated that reasoning and
correcting abilities are not the same thins, and reasoning abilities are much better, improving LLMs’
correcting abilities bocomes vital and important.
**4.Diagnostic-Oriented Prompting(DOP) is an**
**effective method to enhance the correcting abili-**
**ties of LLMs. We modify the research paradigm**
of the mainstream research and proposes MWPC
task. With the aid of educational resources and
DOP, LLMs can be an excellent corrector, which is
useful to help students dealing with understanding
-----
**References**
Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng,
[Jian-Guang Lou, and Weizhu Chen. 2023. Learning](http://arxiv.org/abs/2310.20689)
[from mistakes makes llm better reasoner.](http://arxiv.org/abs/2310.20689)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems. CoRR, abs/2110.14168.](http://arxiv.org/abs/2110.14168)
Hanyue Du, Yike Zhao, Qingyuan Tian, Jiani Wang, Lei
[Wang, Yunshi Lan, and Xuesong Lu. 2023. Flacgec:](https://doi.org/10.1145/3583780.3615119)
[A chinese grammatical error correction dataset with](https://doi.org/10.1145/3583780.3615119)
[fine-grained linguistic annotation. In Proceedings](https://doi.org/10.1145/3583780.3615119)
_of the 32nd ACM International Conference on In-_
_formation and Knowledge Management, CIKM ’23._
ACM.
Zhengxiao Du, Yujie Qian, Xiao Liu, Ming Ding,
[Jiezhong Qiu, Zhilin Yang, and Jie Tang. 2022. GLM:](https://doi.org/10.18653/V1/2022.ACL-LONG.26)
[general language model pretraining with autoregres-](https://doi.org/10.18653/V1/2022.ACL-LONG.26)
[sive blank infilling. In Proceedings of the 60th An-](https://doi.org/10.18653/V1/2022.ACL-LONG.26)
_nual Meeting of the Association for Computational_
_Linguistics (Volume 1: Long Papers), ACL 2022,_
_Dublin, Ireland, May 22-27, 2022, pages 320–335._
Association for Computational Linguistics.
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen,
Yujiu Yang, Minlie Huang, Nan Duan, and Weizhu
Chen. 2023. [Tora: A tool-integrated reasoning](https://doi.org/10.48550/ARXIV.2309.17452)
[agent for mathematical problem solving.](https://doi.org/10.48550/ARXIV.2309.17452) _CoRR,_
abs/2309.17452.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
[Large language models are reasoning teachers. In](https://doi.org/10.18653/V1/2023.ACL-LONG.830)
_Proceedings of the 61st Annual Meeting of the As-_
_sociation for Computational Linguistics (Volume 1:_
_Long Papers), ACL 2023, Toronto, Canada, July 9-14,_
_2023, pages 14852–14882. Association for Computa-_
tional Linguistics.
Jie Huang, Xinyun Chen, Swaroop Mishra,
Huaixiu Steven Zheng, Adams Wei Yu, Xiny[ing Song, and Denny Zhou. 2023. Large language](https://doi.org/10.48550/ARXIV.2310.01798)
[models cannot self-correct reasoning yet.](https://doi.org/10.48550/ARXIV.2310.01798) _CoRR,_
abs/2310.01798.
Shima Imani, Liang Du, and Harsh Shrivastava. 2023.
[Mathprompter: Mathematical reasoning using large](https://doi.org/10.18653/V1/2023.ACL-INDUSTRY.4)
[language models. In Proceedings of the The 61st An-](https://doi.org/10.18653/V1/2023.ACL-INDUSTRY.4)
_nual Meeting of the Association for Computational_
_Linguistics: Industry Track, ACL 2023, Toronto,_
_Canada, July 9-14, 2023, pages 37–42. Association_
for Computational Linguistics.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu,
Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea
[Madotto, and Pascale Fung. 2023. Survey of halluci-](https://doi.org/10.1145/3571730)
[nation in natural language generation. ACM Comput.](https://doi.org/10.1145/3571730)
_Surv., 55(12):248:1–248:38._
Albert Qiaochu Jiang, Alexandre Sablayrolles, Arthur
Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de Las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier,
L’elio Renard Lavaud, Marie-Anne Lachaux, Pierre
Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang,
[Timothée Lacroix, and William El Sayed. 2023. Mis-](https://api.semanticscholar.org/CorpusID:263830494)
[tral 7b. ArXiv, abs/2310.06825.](https://api.semanticscholar.org/CorpusID:263830494)
Takeshi Kojima, Shixiang (Shane) Gu, Machel Reid, Yu[taka Matsuo, and Yusuke Iwasawa. 2022. Large lan-](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
[guage models are zero-shot reasoners. In Advances in](https://proceedings.neurips.cc/paper_files/paper/2022/file/8bb0d291acd4acf06ef112099c16f326-Paper-Conference.pdf)
_Neural Information Processing Systems, volume 35,_
pages 22199–22213. Curran Associates, Inc.
Sang Yun Kwon, Gagan Bhatia, El Moatez Billah
[Nagoudi, and Muhammad Abdul-Mageed. 2023. Be-](http://arxiv.org/abs/2312.08400)
[yond english: Evaluating llms for arabic grammatical](http://arxiv.org/abs/2312.08400)
[error correction.](http://arxiv.org/abs/2312.08400)
Juho Leinonen, Paul Denny, Stephen MacNeil, Sami
Sarsa, Seth Bernstein, Joanne Kim, Andrew Tran,
[and Arto Hellas. 2023. Comparing code explanations](http://arxiv.org/abs/2304.03938)
[created by students and large language models.](http://arxiv.org/abs/2304.03938)
Tengxiao Liu, Qipeng Guo, Yuqing Yang, Xiangkun Hu,
Yue Zhang, Xipeng Qiu, and Zheng Zhang. 2023a.
[Plan, verify and switch: Integrated reasoning with](https://aclanthology.org/2023.emnlp-main.169)
[diverse x-of-thoughts. In Proceedings of the 2023](https://aclanthology.org/2023.emnlp-main.169)
_Conference on Empirical Methods in Natural Lan-_
_guage Processing, EMNLP 2023, Singapore, Decem-_
_ber 6-10, 2023, pages 2807–2822. Association for_
Computational Linguistics.
[Tiedong Liu and Bryan Kian Hsiang Low. 2023. Goat:](https://doi.org/10.48550/ARXIV.2305.14201)
[Fine-tuned llama outperforms GPT-4 on arithmetic](https://doi.org/10.48550/ARXIV.2305.14201)
[tasks. CoRR, abs/2305.14201.](https://doi.org/10.48550/ARXIV.2305.14201)
Yixin Liu, Avi Singh, C. Daniel Freeman, John D. Co[Reyes, and Peter J. Liu. 2023b. Improving large lan-](https://doi.org/10.48550/ARXIV.2310.10047)
[guage model fine-tuning for solving math problems.](https://doi.org/10.48550/ARXIV.2310.10047)
_CoRR, abs/2310.10047._
Phuoc Pham Van Long, Duc Anh Vu, Nhat M. Hoang,
[Xuan Long Do, and Anh Tuan Luu. 2023. Chatgpt as](http://arxiv.org/abs/2312.01661)
[a math questioner? evaluating chatgpt on generating](http://arxiv.org/abs/2312.01661)
[pre-university math questions.](http://arxiv.org/abs/2312.01661)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei
[Lin, Shifeng Chen, and Dongmei Zhang. 2023. Wiz-](https://doi.org/10.48550/ARXIV.2308.09583)
[ardmath: Empowering mathematical reasoning for](https://doi.org/10.48550/ARXIV.2308.09583)
[large language models via reinforced evol-instruct.](https://doi.org/10.48550/ARXIV.2308.09583)
_CoRR, abs/2308.09583._
Jakub Macina, Nico Daheim, Sankalan Pal Chowdhury,
Tanmay Sinha, Manu Kapur, Iryna Gurevych, and
[Mrinmaya Sachan. 2023. Mathdial: A dialogue tutor-](https://aclanthology.org/2023.findings-emnlp.372)
[ing dataset with rich pedagogical properties grounded](https://aclanthology.org/2023.findings-emnlp.372)
[in math reasoning problems. In Findings of the Asso-](https://aclanthology.org/2023.findings-emnlp.372)
_ciation for Computational Linguistics: EMNLP 2023,_
_Singapore, December 6-10, 2023, pages 5602–5621._
Association for Computational Linguistics.
Stephen MacNeil, Paul Denny, Andrew Tran, Juho
Leinonen, Seth Bernstein, Arto Hellas, Sami Sarsa,
-----
[and Joanne Kim. 2023. Decoding logic errors: A](http://arxiv.org/abs/2311.16017)
[comparative study on bug detection by students and](http://arxiv.org/abs/2311.16017)
[large language models.](http://arxiv.org/abs/2311.16017)
Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler
Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon,
Nouha Dziri, Shrimai Prabhumoye, Yiming Yang,
Shashank Gupta, Bodhisattwa Prasad Majumder,
Katherine Hermann, Sean Welleck, Amir Yazdan[bakhsh, and Peter Clark. 2023. Self-refine: Iterative](http://arxiv.org/abs/2303.17651)
[refinement with self-feedback.](http://arxiv.org/abs/2303.17651)
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Liangming Pan, Michael Saxon, Wenda Xu, Deepak
Nathani, Xinyi Wang, and William Yang Wang. 2023.
[Automatically correcting large language models: Sur-](http://arxiv.org/abs/2308.03188)
[veying the landscape of diverse self-correction strate-](http://arxiv.org/abs/2308.03188)
[gies.](http://arxiv.org/abs/2308.03188)
Yixiao Song, Kalpesh Krishna, Rajesh Bhatt, Kevin
[Gimpel, and Mohit Iyyer. 2023. Gee! grammar error](http://arxiv.org/abs/2311.09517)
[explanation with large language models.](http://arxiv.org/abs/2311.09517)
Kaya Stechly, Matthew Marquez, and Subbarao Kamb[hampati. 2023. Gpt-4 doesn’t know it’s wrong: An](http://arxiv.org/abs/2310.12397)
[analysis of iterative prompting for reasoning prob-](http://arxiv.org/abs/2310.12397)
[lems.](http://arxiv.org/abs/2310.12397)
Chenming Tang, Xiuyu Wu, and Yunfang Wu. 2023.
[Are pre-trained language models useful for model](https://doi.org/10.18653/V1/2023.ACL-SHORT.77)
[ensemble in chinese grammatical error correction?](https://doi.org/10.18653/V1/2023.ACL-SHORT.77)
In Proceedings of the 61st Annual Meeting of the
_Association for Computational Linguistics (Volume 2:_
_Short Papers), ACL 2023, Toronto, Canada, July 9-14,_
_2023, pages 893–901. Association for Computational_
Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton
Ferrer, Moya Chen, Guillem Cucurull, David Esiobu,
Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller,
Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan
Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa,
Isabel Kloumann, Artem Korenev, Punit Singh Koura,
Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten,
Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu,
Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan,
Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, and Thomas
[Scialom. 2023. Llama 2: Open foundation and fine-](http://arxiv.org/abs/2307.09288)
[tuned chat models.](http://arxiv.org/abs/2307.09288)
Karthik Valmeekam, Matthew Marquez, and Subbarao
[Kambhampati. 2023a. Can large language models](http://arxiv.org/abs/2310.08118)
[really improve by self-critiquing their own plans?](http://arxiv.org/abs/2310.08118)
Karthik Valmeekam, Matthew Marquez, Sarath Sreed[haran, and Subbarao Kambhampati. 2023b. On the](http://arxiv.org/abs/2305.15771)
[planning abilities of large language models : A criti-](http://arxiv.org/abs/2305.15771)
[cal investigation.](http://arxiv.org/abs/2305.15771)
Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu,
Yunshi Lan, Roy Ka-Wei Lee, and Ee-Peng Lim.
[2023a. Plan-and-solve prompting: Improving zero-](https://doi.org/10.18653/V1/2023.ACL-LONG.147)
[shot chain-of-thought reasoning by large language](https://doi.org/10.18653/V1/2023.ACL-LONG.147)
[models. In Proceedings of the 61st Annual Meeting](https://doi.org/10.18653/V1/2023.ACL-LONG.147)
_of the Association for Computational Linguistics (Vol-_
_ume 1: Long Papers), ACL 2023, Toronto, Canada,_
_July 9-14, 2023, pages 2609–2634. Association for_
Computational Linguistics.
Rose E. Wang, Qingyang Zhang, Carly Robinson, Su[sanna Loeb, and Dorottya Demszky. 2023b. Step-by-](http://arxiv.org/abs/2310.10648)
[step remediation of students’ mathematical mistakes.](http://arxiv.org/abs/2310.10648)
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V.
Le, Ed H. Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023c. [Self-consistency](https://openreview.net/pdf?id=1PL1NIMMrw)
[improves chain of thought reasoning in language](https://openreview.net/pdf?id=1PL1NIMMrw)
[models. In The Eleventh International Conference](https://openreview.net/pdf?id=1PL1NIMMrw)
_on Learning Representations, ICLR 2023, Kigali,_
_Rwanda, May 1-5, 2023. OpenReview.net._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le,
[and Denny Zhou. 2022. Chain-of-thought prompt-](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
[ing elicits reasoning in large language models. In](https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf)
_Advances in Neural Information Processing Systems,_
volume 35, pages 24824–24837. Curran Associates,
Inc.
Haoyi Wu, Wenyang Hui, Yezeng Chen, Weiqi Wu,
[Kewei Tu, and Yi Zhou. 2023. Conic10k: A chal-](http://arxiv.org/abs/2311.05113)
[lenging math problem understanding and reasoning](http://arxiv.org/abs/2311.05113)
[dataset.](http://arxiv.org/abs/2311.05113)
Ai Ming Yang, Bin Xiao, Bingning Wang, Borong
Zhang, Ce Bian, Chao Yin, Chenxu Lv, Da Pan, Dian
Wang, Dong Yan, Fan Yang, Fei Deng, Feng Wang,
Feng Liu, Guangwei Ai, Guosheng Dong, Hai Zhao,
Hang Xu, Hao-Lun Sun, Hongda Zhang, Hui Liu,
Jiaming Ji, Jian Xie, Juntao Dai, Kuncheng Fang,
Lei Su, Liang Song, Lifeng Liu, Liyun Ru, Luyao
Ma, Mang Wang, Mickel Liu, MingAn Lin, Nuolan
Nie, Pei Guo, Ruiyang Sun, Zhang Tao, Tianpeng Li,
Tianyu Li, Wei Cheng, Weipeng Chen, Xiangrong
Zeng, Xiaochuan Wang, Xiaoxi Chen, Xin Men, Xin
Yu, Xuehai Pan, Yan-Bin Shen, Yiding Wang, Yiyu
Li, Youxin Jiang, Yuchen Gao, Yupeng Zhang, Zenan
[Zhou, and Zhiying Wu. 2023. Baichuan 2: Open](https://api.semanticscholar.org/CorpusID:261951743)
[large-scale language models. ArXiv, abs/2309.10305.](https://api.semanticscholar.org/CorpusID:261951743)
Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu,
Zhengying Liu, Yu Zhang, James T. Kwok, Zhenguo
[Li, Adrian Weller, and Weiyang Liu. 2023. Meta-](https://doi.org/10.48550/ARXIV.2309.12284)
[math: Bootstrap your own mathematical questions](https://doi.org/10.48550/ARXIV.2309.12284)
[for large language models. CoRR, abs/2309.12284.](https://doi.org/10.48550/ARXIV.2309.12284)
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex
[Smola. 2023. Automatic chain of thought prompting](https://openreview.net/pdf?id=5NTt8GFjUHkr)
[in large language models. In The Eleventh Inter-](https://openreview.net/pdf?id=5NTt8GFjUHkr)
_national Conference on Learning Representations,_
-----
_ICLR 2023, Kigali, Rwanda, May 1-5, 2023. Open-_
Review.net.
Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang,
Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen
Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen
Yang, Yushuo Chen, Z. Chen, Jinhao Jiang, Ruiyang
Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu,
[Jianyun Nie, and Ji rong Wen. 2023. A survey of](https://api.semanticscholar.org/CorpusID:257900969)
[large language models. ArXiv, abs/2303.18223.](https://api.semanticscholar.org/CorpusID:257900969)
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc V. Le, and Ed H.
[Chi. 2023. Least-to-most prompting enables com-](https://openreview.net/pdf?id=WZH7099tgfM)
[plex reasoning in large language models. In The](https://openreview.net/pdf?id=WZH7099tgfM)
_Eleventh International Conference on Learning Rep-_
_resentations, ICLR 2023, Kigali, Rwanda, May 1-5,_
_2023. OpenReview.net._
-----
| [
"Hao, Chen",
"Biaojie, Zeng",
"Xin, Lin",
"Aimin, Zhou",
"Liang, He"
] | 2024-05-20T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2405.12100 | https://arxiv.org/abs/2405.12100 | https://www.semanticscholar.org/paper/490cd30b2e0bd83cdc62d28ffdd4279931522b9c |
Data Augmentation with In-Context Learning and Comparative Evaluation in Math Word Problem Solving | Math Word Problem (MWP) solving presents a challenging task in Natural Language Processing (NLP). This study aims to provide MWP solvers with a more diverse training set, ultimately improving their ability to solve various math problems. We propose several methods for data augmentation by modifying the problem texts and equations, such as synonym replacement, rule-based: question replacement, and rule based: reversing question methodologies over two English MWP datasets. This study extends by introducing a new in-context learning augmentation method, employing the Llama-7b language model. This approach involves instruction-based prompting for rephrasing the math problem texts. Performance evaluations are conducted on 9 baseline models, revealing that augmentation methods outperform baseline models. Moreover, concatenating examples generated by various augmentation methods further improves performance. | This study aims to provide MWP solvers with a more diverse training set, ultimately improving their ability to solve various math problems by introducing a new in-context learning augmentation method, employing the Llama-7b language model. | ## Data Augmentation with In-Context Learning and Comparative Evaluation in Math Word Problem Solving
Gulsum Yigit[1,2*][†] and Mehmet Fatih Amasyali[1][†]
1*Department of Computer Engineering, Yildiz Technical University,
Istanbul, Turkey.
2Department of Computer Engineering, Kadir Has University, Istanbul,
Turkey.
*Corresponding author(s). E-mail(s): [email protected];
Contributing authors: [email protected];
**_†These authors contributed equally to this work._**
**Abstract**
Math Word Problem (MWP) solving presents a challenging task in Natural Language Processing (NLP). This study aims to provide MWP solvers with a more
diverse training set, ultimately improving their ability to solve various math
problems. We propose several methods for data augmentation by modifying the
problem texts and equations, such as synonym replacement, rule-based: question replacement, and rule based: reversing question methodologies over two
English MWP datasets. This study extends by introducing a new in-context
learning augmentation method, employing the Llama-7b language model. This
approach involves instruction-based prompting for rephrasing the math problem
texts. Performance evaluations are conducted on 9 baseline models, revealing
that augmentation methods outperform baseline models. Moreover, concatenating examples generated by various augmentation methods further improves
performance.
**Keywords: Question Answering, Math Word Problem Solving, Data Augmentation,**
In-Context Learning, Llama-7b
-----
### 1 Introduction
Question Answering (QA) systems are instrumental in Natural Language Processing
(NLP). QA systems are responsible for understanding and reacting to the user questions in a manner resembling a human [1–4]. This makes them essential for various
applications such as search engines, virtual assistants, information retrieval systems,
etc. Enhancing these systems is critical as the demand for instant and accurate information retrieval continues. The structure of QA systems varies based on the domain
and the types of questions they have [5–7]. One important form is solving Math Word
Problems (MWPs).
MWPs are a complex category within the field of QA systems [8–13]. Challenges
in these systems require knowledge beyond simple pattern recognition and keyword
matching. These special systems involve understanding given mathematical operators,
quantities, and their complex relationships with each other. MWPs aim to produce
a solution equation. Therefore, it requires identifying numerical values provided in
context, carefully selecting appropriate mathematical operations, and transforming
them into particular mathematical expressions involving unknown variables.
Various datasets have been produced in MWPs with unique features and purposes
to benefit research and growth in this field. These datasets vary in source, complexity,
scale, and the number of unknowns. For example, Draw1K [14], HMWP [15], ALG514
[16], MAWPS [17] emphasize complex multi-unknown word problems, while SVAMP
[18], MAWPS-Single [17], ASDIV-A [19], Math23K [20] focus on elementary math
questions with a single unknown. Despite their differences, these datasets share common challenges, such as the need for semantic understanding, extracting numerical
information, and changing textual context into mathematical expressions. Addressing
these challenges is essential for developing practical solutions in MWP scenarios, highlighting the significance of these datasets in advancing the state-of-the-art of MWP
systems.
MWPs include a variety of approaches. Statistical methods often use statistical
patterns and correlations within the dataset [21, 22]. Rule-based approaches involve
applying predefined rules to solve MWPs [16, 23–25]. Semantic parsing methods
attempt to understand the underlying meaning of the text and transform it into
mathematical expressions. On the other hand, deep learning-based models use neural
networks to extract contextual information and capture complex relationships between
textual context and mathematical concepts [20, 26]. Moreover, there are MWP solvers
that have been developed to include pre-trained models. Pre-trained models such as
BERT, GPT and their variants have gained great importance recently. These models
are first pre-trained on massive amounts of textual data, which used to capture complex linguistic patterns and contextual information. When fine-tuned to solve MWP,
they show significant improvements in understanding the language used in MWPs.
These different approaches enable a more comprehensive exploration of potential
solutions in the field of MWPs.
Data augmentation in NLP is essential to improve the performance and robustness of the models. It helps to increase model robustness and reduce overfitting by
providing multiple variations of existing data. However, challenges arise, especially in
-----
maintaining semantics. Simple transformations on textual data can change the meaning of the original context. There are other significant challenges to implementing
data augmentation in an MWP dataset. One issue is model bias, where augmented
data tends towards specific problems. This leads to inaccurate predictions and biased
results. Processing numerical values in the generated augmented samples introduces
another problem: the augmentation procedure may produce inconsistent numerical
information. Additionally, modifying the original data distribution through data augmentation increases the likelihood of overfitting or underfitting for specific problems.
Factors such as data sparsity issues and computational overhead contribute to the
difficulties in data augmentation in the MWP datasets.
In this study, our primary focus has been on enriching MWP datasets via useful
data augmentation methods. We aim to augment training data by modifying the
source text and equations. Building on our previous study in [27], we introduced three
approaches for data augmentation: Rule-Based: Question Replacement, Rule-Based:
Reversing Question and Substitution: Synonym Replacement. Besides, we extended
our work by presenting a novel in-context learning augmentation method, leveraging
the Llama-7b language model [28]. This method employs instruction-based prompting
for rephrasing problem texts. The model generates a rephrased version for each training
example, resulting in new samples after filtering and numeric modification steps. In
this study, we have the following contributions.
- We proposed several augmentation methods: Rule-Based: Question Replacement,
Rule-Based: Reversing Question, and Substitution: Synonym Replacement. Besides,
we extended our work by presenting a novel in-context learning augmentation
method, leveraging the Llama-7b language model.
- We also extended prior experiments and tested these augmentation methods on 9
different baseline models using the MWPS-Single and SVAMP datasets. By comparing the results with the baseline of an earlier study [29], we found that these
augmentation methods consistently lead to improved performance in MWP solving.
This paper is structured as follows: Section 2 provides a review of MWP solving
systems. Section 3 delves into the introduced augmentation methods. In Section 4,
detailed information about the experiments is presented. We performed comprehensive
experiments on 2 MWP datasets and evaluated 9 different models. Section 5 provides
a discussion about approaches and Section 6 concludes the paper.
### 2 Related Work
The investigation of automatic MWP solvers has attracted considerable attention by
operating various approaches. One early method is the rule-based approach, where
decisions are based on a predefined set of rules. This method maps equations into templates, extracting predefined patterns from the problem texts [16, 23–25]. Studies using
rule-based systems have demonstrated satisfactory performances, specifically on smallscale MWP datasets. Nevertheless, a noteworthy disadvantage of this method is its
dependency on human intervention to create templates. Preparing effectual templates
-----
requires a profound understanding of problem structures. It makes the rule-based systems challenging as the complexity of the problems increases. The work in [25] presents
MSWPAS-CP, a computer simulation system developed to help students solve multistep arithmetic word problems. The system processes natural language word problems
into frames. It performs calculus based on these frames.
Another method in earlier investigations concerns a statistical-based procedure,
where a statistical classifier makes decisions. This approach depends on analyzing
statistical patterns to make predictions [21, 30, 31]. The study in [31] introduces an
algorithm for automatically solving algebra word problems. It analyzes a hypothesis
space that contains all potential equations derived from assigning numbers. Via training a log-linear model to optimize the margin between correct and false assignments,
the algorithm efficiently addresses a quadratic programming problem. Moreover, some
earlier works apply semantic parsing [31–33]. Zhou et al. introduced a semantic parsing
and reasoning approach that utilizes a new meaning representation language (DOL)
to connect natural language text and mathematical expressions, employing a parser
to convert text into DOL trees [31].
Research has moved towards utilizing deep learning-based approaches to reduce
human intervention and improve the MWP solvers. One well-known method concerns
using Sequence-to-Sequence (Seq2Seq) models to enhance the performance of MWP
solvers [20, 26, 34, 35]. Huang et al. suggested incorporating a copy-and-alignment
mechanism into the traditional Seq2Seq model [35]. Notably, in a study by Wang et al.,
Recurrent Neural Networks (RNNs) were utilized to transform MWPs into solution
equations [20]. This usage of Seq2Seq models seeks to follow the sequential nature of
language and problem-solving structures. Similarly, in [26], the approach comprises
equation normalization to handle the challenge of duplicate equations. However, one
disadvantage of Seq2Seq models is their potential to yield invalid solution equations
due to a lack of management over the decoder during the generation process. The
decoder, accountable for constructing the output sequence, may generate solutions
that may not be valid in the context of the given problem. This restriction underlines
the need to distill Seq2Seq models to create proper and contextually appropriate
solution equations.
Some studies have considered integrating expression trees to address the issue of
generating valid solution equations using Seq2Seq models. These models are generalized as Seq2Tree models [8, 11]. These models go beyond the conventional Seq2Seq
architectures, which construct the output as an expression tree, seeking more specific
and contextually relevant solution equations. In a study by Xie et al., the authors utilized a Long Short-Term Memory (LSTM) encoder to encode the problem text and
presented a tree-based decoder to generate the equation expression [8]. This Seq2Tree
model denotes a more hierarchical and structured representation of solution equations.
A template-based model was presented in another approach proposed in [11]. This
model interests constructing a Seq2Seq model to predict the tree-structured template
for the solution equation. The predicted numeric values are depicted as leaf nodes,
while the operators perform as non-leaf nodes in the solution expression tree. This
Seq2Tree model provides a more structured outcome and mitigates the case of generating invalid or contextually improper solution equations. Various other models, such as
-----
Chiang et al. with a semantic tracking stack [36], Li et al. [37] incorporating different
functional multihead attentions, and Meng et al. [38] applying double sequence-based
decoders, have been explored.
MWP solvers have seen an expansion towards Graph2Tree approaches [12, 13].
The Graph2Tree method parses the problem text to construct binary trees that preserve numerical data and mathematical operators. Li et al. combined the dependency
parse tree and constituency tree from text descriptions, while Zhang et al. developed
the quantity cell and comparison graphs [12, 39]. These approaches include structural
information from text descriptions. Shen et al. applied a hybrid technique, integrating
a sequence-based encoder with a graph-based decoder [13]. This fusion is proposed to
improve text representations and generate various solution equations. Incorporating
a graph-based decoder permitted the model to offer a more comprehensive understanding and enabled the generation of varied and contextually appropriate solution
equations.
Beyond Graph2Tree strategies, some investigators have examined the integration of
pre-trained language models [10, 40–42]. Liang et al. presented MWP-BERT, leveraging the capabilities of pre-trained language models tailored explicitly for MWP solvers
[10]. This approach emphasizes the contextual understanding grabbed by pre-trained
models, achieving higher performances by producing more accurate and contextually
relevant solution equations.
The emergence of Large Language Models (LLMs) has prompted a surge in innovative approaches to solving MWPs. Many researchers have made notable contributions
to this field, as evidenced by significant advancements [43–48]. Prompt-based learning
has gained attraction for its ability to leverage LLMs’ inherent prediction capabilities.
Unlike traditional supervised learning, prompt-based learning utilizes text prompts to
guide the model’s responses, often in a few-shot or zero-shot format. Lazaridou et al.
and Chen et al. have explored the optimization of prompts for different tasks, facilitating the generation of desired responses from large models without requiring extensive
fine-tuning [47, 49].
Furthermore, Chain of Thought (CoT) prompting has emerged as a promising
approach, generating reasoning steps to guide the model in deriving the actual answer.
This method not only allows the model to arrive at reasonable conclusions but also
enhances its explainability. Wei et al. demonstrated the effectiveness of CoT prompting compared to conventional prompt methods in MWPs [50]. Li et al. presented
DIVERSE (Diverse Verifier on Reasoning Step) as a novel approach to enhance language models’ reasoning capability [44]. It contains three main parts: diverse prompts
generation to investigate different reasoning paths for the same question, utilization
of a verifier to filter out incorrect answers based on a weighted voting scheme, and
verification of each reasoning step individually instead of the entire chain. Wang et
al. presented a novel decoding strategy, self-consistency by sampling diverse reasoning
paths and selecting the most consistent answer. Empirical evaluations across various
arithmetic reasoning benchmarks show that self-consistency significantly improves the
performance of CoT prompting [45].
-----
Chen et al. introduced a “Program of Thoughts” (PoT), which employs language
models, primarily Codex, to generate both text and programming language statements, culminating in an answer [47]. PoT underwent evaluation in both few-shot and
zero-shot settings, revealing improvements in performance. The authors highlight significant limitations observed in CoT, including the propensity of LLMs to encounter
arithmetic errors, particularly with large numbers, the challenge faced by LLMs in
solving complex mathematical expressions such as polynomial or differential equations,
and the inefficiency of LLMs in expressing iteration, especially with numerous steps.
In PoT, computation is delegated to an external language interpreter, while reasoning
steps are articulated as Python programs by Language Models.
### 3 Proposed Augmentation Methods
**3.1 Rephrase with In-Context Learning**
In the previous study, an investigation into the impact of paraphrasing models on
MWPs highlighted challenges in generating rephrased texts while preserving the
integrity of numerical quantities and their relationships [27]. In this study, we present
a new method that introduces a sophisticated approach employing in-context learning
to create rephrased examples for MWP datasets. This approach follows the following
three steps.
- Step 1: In-Context Learning: In the initial step of the proposed method, 15
examples from an MWP dataset are presented to the Llama-7b language model
[28], a powerful tool for natural language understanding. The 15 MWP samples
were randomly selected. No specific heuristic was employed in the selection process.
Then, the instruction as given below was followed by 15 examples and the rephrased
text of these examples. ChatGPT is used to generate the rephrased texts for these
15 examples [51]. The model is instructed to generate a rephrased version of each
training example. The objective is to leverage in-context learning to produce texts
while preserving the inherent mathematical structure.
-----
- Step 2: Filtering Mechanism: Following the generation process, a specific filtering mechanism is implemented to pick the appropriate ones from the rephrased
examples. This process involves two key aspects:
– Validity Check: The filtering mechanism excluded samples lacking a “?” to ensure
they represent valid questions in line with MWPs structures. However, this may
not always catch all the valid questions, especially when problem statements end
with phrases like “Find the value of ...”. While “?” serves as a straightforward
indicator, it may not capture all valid problem structures. As a result, we risk
excluding valid rephrased versions without question marks. Furthermore, there
are instances where the model generates empty strings, leading to the absence of
rephrased versions for certain samples. It’s noteworthy that, as shown in Table
2, occurrences of these conditions are relatively low, especially in the SVAMP
dataset.
– Diversity Enhancement: Rephrased examples identical to the original ones are
omitted to enhance diversity in the augmented dataset. In step 1, we employ
an LLM to generate diverse variations of the original questions. However, due
to the inherent nature of the model, there were cases where the rephrased produced questions were identical to the original ones. Therefore, we implemented
an automatic process to identify and remove these duplicate examples to address
this issue. If an exact match was found, indicating redundancy in the augmented
dataset, those examples were eliminated to ensure the diversity and uniqueness
of the dataset.
The filtration process is done automatically. There is no involvement of humans.
- Step 3: Numerical Modification: The subsequent phase of the algorithm focuses
on numerical modification to introduce variability in the dataset. This involves random replacement of numerical values, where randomly generated integers replace
integer values, and float numbers are replaced by randomly generated float numbers.
Crucially, to maintain consistency, the algorithm ensures that modifications applied
to numerical values are consistently propagated to the corresponding equations
and answers. Equations and answers are reconstructed to preserve the logical and
mathematical integrity of the examples.
Note that, changing the numerical values is significant for several reasons. While
introducing quantity-agnostic tags/tokens could be an alternative approach, changing the numerical values directly offers several advantages. Firstly, altering the
numerical values allows for generating a more diverse set of problem equations/answers. This variation is crucial for training models to handle various scenarios.
Secondly, changing the numerical values helps prevent the model from memorizing specific samples, equations/answers and encourages it to learn underlying
patterns and relationships. This enables better generalization to unseen data and
improves the model’s robustness. While quantity-agnostic tags/tokens may offer
some advantages, changing the numerical values directly remains a salient approach
for enhancing the diversity, generalization, and coherence of generated MWPs.
The augmented dataset comprises the modified examples with the original examples. Through this new approach, our study offers a nuanced and practical approach to
-----
enhance the quality of rephrased examples in MWPs. The prompts used to generate the
rephrased versions of the problems are provided in the appendices for reproducibility
purposes (see Appendix A).
**3.2 Rule-Based: Reversing Question**
Liu et al. introduced the concept of using inverted versions of problem statements
in MWPs [52]. Additionally, it is important to mention that this methodology has
been investigated in other studies as well [18, 53].A similar approach is adopted in
the previous study [27]. The actual problem text has been altered to make additional
samples. The question and valid answer from the problem text generate a renewed
supporting sentence. Then, a randomly picked sentence retaining a numerical value
is converted into a new problem text, with the numerical value as the solution. This
procedure forms new samples and assures contextual relevance. All possible equations
related to the numerical values in the question text are created, increasing the data
with various perspectives and helping the model to adapt to diverse challenges.
**3.3 Rule-Based: Question Replacement**
The previous study modified the question text to generate diverse problem types [27].
In this method, key phrases such as “How many” and “How much” in the original
problem are replaced with “What is x/y of,” where x and y vary from 1 to 10. After
this modification, a T5-Based model fine-tuned for grammar checking used to generate a modified version of the question text while maintaining consistency among the
question, equation, and answer. This technique presented complexity by producing
different texts for the same underlying question, diversifying the dataset, and preventing direct answer mapping. By incorporating such variations, the model becomes more
robust at handling various problem formulations and answer types.
**3.4 Substitution: Synonym Replacement**
This method introduced synonyms from the NLTK WordNet to the original problems,
adding semantic variations without varying the mathematical logic [27]. By randomly
picking terms for replacement, the method increases the vocabulary of the problem
text, contributing to a more diverse dataset. Using NLTK WordNet ensures that the
selected synonyms are contextually pertinent, preserving the mathematical reasoning
in the original question text. This strategy desires to create a dataset with various
linguistic expressions while protecting the underlying mathematical structure in MWP
solving.
Table 1 shows examples of the proposed augmentation techniques on MWP. It
includes the original text, equation, and answer, followed by the samples generated by
proposed augmentation methods involving synonym replacement, rule based: reversing
question, rule-based: question replacement, and in-context learning, and the corresponding equations and the answers. The modifications made to the original example
are emphasized for each proposed augmentation method.
-----
Text Equation Answer
Fred had 7 dimes in his bank . His sister borrowed
Original X=7-3 4
3 of his dimes . How many dimes does Fred have now?
Fred had 7 dimes in his depository financial institution .
His Sister lent 3 of his dimes. How many dimes
does Fred have now?
Substitution:
Synonym
Replacement
Rule based:
Reversing
Question
Rule based:
Question
Replacement
Rephrase with
In-Context
Learning
X=7-3
Fred has 4 dimes now. His sister borrowed 3 of
4=X-3 7
his dimes. How many dimes in his bank did Fred have?
Fred had 7 dimes in his bank. His sister borrowed
3 of his dimes. What is 9/10 of all the dimes Fred X=(7-3) *(9/10) 3.6
has now?
Fred initially had 23 dimes in his bank, but after
his sister borrowed 9 dimes, how many dimes does Fred
have remaining?
X=23-9 14
**Table 1 Examples from augmentation methods**
### 4 Experiments
**4.1 Datasets and Models**
**Datasets: The following datasets are used in the experiments:**
- MAWPS-Single: MAWPS (MAth Word ProblemS) is a framework for building
an online repository of MWPs, offering a diverse collection of MWPs along with
their answers and equation templates [17]. MAWPS is designed to be a comprehensive and customizable resource for evaluating various algorithms. In the context
of MAWPS and its sub-datasets, the term “single equation” indicates that the
problems involve mathematical expressions with a single unknown variable. The
MAWPS-Single subset is designed to perform experiments or evaluations requiring
or emphasizing one unknown problem type in MWPs. This subset comprises 1,987
MWPs, each featuring a single equation.
- SVAMP: SVAMP is a challenging dataset strategically collected from existing
datasets with a specific focus on addressing the interesting observation that certain
word problems can be solved without the complete problem text [18]. The creation of
SVAMP involved the application of various modifications to seed examples sourced
from the ASDiv-A dataset [19]. This choice was driven by the perceived higher
quality and increased difficulty of ASDiv-A compared to the MAWPS dataset. This
dataset comprised 3,138 problems in the training set and 1,000 problems in the test
set.
These two datasets comprise the basis for the experiments, providing a varied and
specialized set of MWPs for analyzing and evaluating algorithms in MWP solving.
In addition to the previously mentioned augmentation sets, including the original
examples, 4 additional trainsets have been constructed: Combined V1, Combined V2,
Combined V3, and Combined V4. These sets are formulated by combining the samples
generated via various augmentation methods.
-----
Rephrase with
In-Context
Learning
Question
trainset Reversing
Repl.
Question
Synonym
Repl.
Combined Combined Combined Combined
V1 V2 V3 V4
asdiv-a
3138 5998 5553 6274 5744 11545 16810 14244 19509
SVAMP
MAWPS-Single 1589 3043 2557 3178 2996 5599 8022 7080 9503
**Table 2 Number of training instances utilized for the proposed methods across the two datasets**
- Combined V1 contains instances generated by Rule-Based procedures such as
Question Replacement and Reversing Questions, along with the actual training
samples.
- Combined V2 comprises instances created by Rule-Based methods and actual training samples. Then, Substitution: Synonym Replacement approach is applied to all
those examples, so it comprises more instances than Combined V1.
- Combined V3 contains all examples from Combined V1 and adds examples generated by the in-context learning augmentation approach, ensuring no duplication of
the original training data.
- Finally, Combined V4 includes all examples from Combined V3 and incorporates
additional samples generated through the in-context augmentation approach, also
ensuring that there is no duplication of the original training data.
Table 2 shows the number of training instances utilized for the proposed methods across the two datasets, including the combined sets. Having diversified sets
aims to enhance the robustness and generalization of the training data for improved
performance in MWP solving.
**Models: This study assesses and compares diverse neural models for MWP solv-**
ing. A comparative table of these models is given in Table 3. Each model is designed
to overcome distinct challenges in the domain. GTS presents a novel tree-structured
model that works goal-driven, recursively decomposing the problem into sub-goals
until getting a known quantity. DNS converts MWPs directly into equation templates
using an RNN, improving performance with a hybrid model integrating a similaritybased retrieval model. RobertaGen, built upon RoBERTa, makes a powerful tool for a
nuanced understanding of mathematical complexities. RNNVAE leverages directions
from variational autoencoders and RNNs, incorporating VAE’s text modeling with
RNN’s sequential processing to obtain accurate MWP answers.
Moreover, Graph2Tree addresses challenges in capturing relationships between
quantities in tree-based neural models. MathEN focuses on the issue of multiple correct equations in MWPs. It introduces an ensemble model integrating the strengths
of individual Seq2Seq models. Saligned utilizes a neural encoder-decoder framework
focused on the semantic meanings of symbols in the text. MWPBert, a BERT-based
model, addresses number representation challenges by injecting numerical properties
into symbolic placeholders. SAUSolver generates universal expression trees based on
the semantic meanings of previously generated symbols.
The models employed in this study can be classified based on their architectures: GTS, MWPBert, and SAUSolver serve as Seq2Tree models; RNNVAE,
DNS, MathEN, and Saligned utilize Seq2Seq architectures; Graph2Tree is specifically
designed as a Graph2Tree model, and RobertaGen stands out as a pre-trained model.
10
-----
Pretrained
Model Year Type Description Encoder Decoder
model
Hybrid variational encoder-decoder
system integrating VAE and RNN principles.
Leverages VAE strengths in text modeling
and sequential processing abilities of RNNs.
Deep neural solver for MWPs.
Utilizes a hybrid model with a
similarity-based retrieval model for
performance improvements.
Addresses the issue of multiple correct
equations in Seq2Seq models for MWPs.
Introduces an equation normalization
method.
RNNVAE [34] 2016 Seq2Seq
DNS [20] 2017 Seq2Seq
MathEN [26] 2018 Seq2Seq
LSTM LSTM
GRU LSTM
BiLSTM LSTM
Goal-driven tree-structured model for MWP
GTS [8] 2019 Seq2Tree solving. Uses two-layer gated feed forward GRU TreeDecoder -
networks for goal decomposition.
Advanced language model based
RobertaGen [41] 2019 Pre-trained on RoBERTa. Specializes in MWPs excelling RoBERTa Transformer RoBERTa
in contextual and semantic understanding.
Neural encoder-decoder framework
Saligned [36] 2019 Seq2Seq for MWP solving. Focuses on the semantic BiLSTM LSTM -
meanings of symbols in the text.
LSTM
+
Graph
Convolutional
Networks
Deep learning architecture for improved
solution equation generation in MWPs.
Uses Quantity Cell and Quantity Comparison
Graphs for representation.
Semantically aligned universal tree structured
solver. Encoder-decoder framework generating
a universal expression tree based on semantic
meanings.
Graph2Tree [12] 2020 Graph2Tree
SAUSolver [15] 2020 Seq2Tree
TreeDecoder
GRU TreeDecoder
Addresses the issue of number representation
MWPBert [54] 2021 Seq2Tree in MWP solving. BERT-based model injecting Bert TreeDecoder Bert
numerical properties into symbolic placeholders.
**Table 3 Comparison of MAWP Solving Models**
This categorization emphasizes the study’s diverse architectural strategies for MWP
solving.
**4.2 Experimental Results**
Our experiments mainly focus on the MAWPS-Single and SVAMP datasets, which
are widely used one-unknown MWP datasets. The performances of the conducted
experiments on MAWPS-Single and SVAMP datasets are illustrated in Table 4 and
5. Rows and columns describe models and presented augmentation strategies. Results
are underlined and bold for instances where the augmentation technique performs less
than the reproduced performances. Further, the highest two performances on the test
set are highlighted in red.
We employ two evaluation metrics, equation accuracy and answer accuracy, to
assess the performance of the models. Equation accuracy determines whether the predicted solution equation matches the expected equation precisely. Moreover, answer
accuracy evaluates the exact matches between predicted and expected answers. By
utilizing both metrics, we comprehensively assess the effectiveness of our models in
generating accurate solutions to MWPs. In addition, our choice of accuracy for comparison aligns with the evaluation metric used by state-of-the-art baseline models in
the field. This ensures a direct and fair comparison between our proposed methods
and existing approaches.
Our study employed proposed augmentation methods across various models,
including 4 Seq2Seq, 3 Seq2Tree, 1 Graph2Tree, and 1 RobertaGen model. In our
11
-----
Rephrase
in
context
learning
MAWPS-Single in [Acc29] Reproduced QuestionRepl. ReversingQuestion SynonymRepl.
Combined Combined Combined Combined
V1 V2 V3 V4
equacc 78,9 79,4 79,4 79,4 79,4 79,9 81,4 80,4 79,9 80,9
DNS accval 86,3 88,4 **87,9** **87,9** **87,9** **87,9** 88,9 88,9 **87,4** **87,4**
equacc 85,9 84,4 87,9 85,9 86,9 85,9 87,9 86,9 86,4 86,4
MathEN accval 86,4 85,9 88,9 86,9 87,9 86,9 88,9 87,9 87,4 87,4
equacc 86 77,4 78,4 79,9 78,4 77,4 80,9 79,9 80,4 78,4
Saligned accval 86,3 85,4 88,9 87,9 87,9 85,9 88,9 87,9 87,9 86,4
equacc 79,8 87,9 88,9 **87,4** **87,4** 87,9 88,9 89,4 87,9 89,4
RNNVAE accval 88,2 88,9 89,9 88,9 **88,4** 88,9 89,9 90,5 88,9 89,9
equacc - 84,4 86,4 **82,4** 86,4 86,4 86,9 85,9 86,9 88,4
MWPBert accval - 84,9 86,9 **82,9** 86,9 86,9 87,4 86,4 86,9 88,4
equacc 83,4 83,9 85,9 86,4 84,9 85,9 85,4 85,4 85,9 86,4
SAUSolver accval 84 84,9 86,4 86,9 85,4 86,4 85,9 85,9 86,4 86,4
equacc 86 77,4 78,4 79,9 78,4 77,4 80,9 79,9 80,4 78,4
equacc 79,8 87,9 88,9 **87,4** **87,4** 87,9 88,9 89,4 87,9 89,4
equacc 83,4 83,9 85,9 86,4 84,9 85,9 85,4 85,4 85,9 86,4
equacc 83,5 83,9 86,9 84,9 85,9 87,4 85,4 84,9 85,4 87,4
GTS accval 84,1 84,4 87,4 85,4 86,4 87,9 85,9 85,4 85,9 87,9
Graph2Tree equacc 84,9 84,9 86,4 84,9 87,4 86,4 86,9 86,4 88,4 87,9
Graph2Tree accval 85,6 85,4 86,9 85,4 87,9 85,4 87,4 86,9 89,4 88,9
equacc 80,8 83,4 85,9 84,9 84,4 84,4 83,9 84,4 85,9 85,4
Pre-trained RobertaGen accval 88,4 84,9 86,9 85,9 84,9 85,4 **84,4** 85,9 86,9 85,9
**Table 4 Experiments on MAWPS-Single dataset**
analysis, we compared the obtained performances with the baseline in [29]. Through
a comparison of these performances, the following outcomes can be inferred.
- In general, the proposed data augmentation methods performed better across nine
distinct models.
- The combined augmentation sets mostly outperformed other individual methods.
- On the MAWPS-Single dataset,
– The Seq2Seq models revealed their highest performances mainly over Combined
V1.
– Seq2Tree, Graph2Tree, and pre-trained models performed better when trained on
Combined V3.
- Similarly, for the Svamp dataset, the superiority of combined datasets, specifically
Combined v4, was evident as it consistently achieved the highest performances.
Moreover, Table 6 provides a detailed comparison of the performance of different
augmentation approaches across datasets and evaluation metrics. Each row corresponds to a specific dataset with an evaluation metric, while the columns represent
different augmentation approaches employed. Each cell in the table contains a value
indicating how each augmentation approach performs relative to baseline methods.
For instance, the value “3/9” in a cell indicates that the corresponding augmentation approach outperformed the baseline in three out of nine models considered for
evaluation. According to this table:
12
-----
Rephrase
in
context
learning
SVAMP in [Acc29] Reproduced QuestionRepl. ReversingQuestion SynonymRepl.
Combined Combined Combined Combined
V1 V2 V3 V4
equacc 22.1 19 23,7 20,2 23,1 21,2 23,9 23,6 22,5 23,3
DNS accval 24.2 22,6 26,8 23,6 26,8 23,5 26,9 26,5 25,2 26,2
equacc 21.8 23,7 25,1 24,8 24,3 **23,2** 25,1 25,7 26,5 26,3
MathEN accval 25.0 24,3 25,6 25,2 24,9 **23,9** 25,6 26,3 27,3 27
equacc 23.9 24,1 24,3 **21,6** 24,5 24,5 25 26 **24** 25,5
Saligned accval 26.1 27,2 **27** **24,6** 27,6 27,6 28,5 28,5 27,9 28,7
equacc 23.2 21,7 24,1 23,4 22,6 23,6 24,6 23,4 25,2 24,7
RNNVAE accval 25.9 25,4 27,2 26,2 26,7 26,9 28,3 27,4 28,4 28,5
equacc - 24,4 25,1 **24,2** 26,6 28,5 26 27,4 27,4 27,7
MWPBert accval - 26,6 28,5 **26** 29,4 32,8 28,3 30,1 31,1 31,4
equacc 27.1 25,1 26 **24,3** **25** 25,7 26,9 25,8 26,3 27
SAUSolver accval 29.7 27,6 28,3 **27** 28,3 28,5 30,2 28,4 29,1 29
equacc 23.9 24,1 24,3 **21,6** 24,5 24,5 25 26 **24** 25,5
equacc 23.2 21,7 24,1 23,4 22,6 23,6 24,6 23,4 25,2 24,7
equacc 27.1 25,1 26 **24,3** **25** 25,7 26,9 25,8 26,3 27
equacc 25.6 25,3 26,7 **24,2** 25,5 25,9 26 25,5 26,3 27
GTS accval 29.1 28,5 29 **27,2** 28,6 28,9 29,1 28,4 29,4 29
equacc 31.6 32,5 33,6 **31,4** 32,6 34 34 33,2 **31** **31,4**
Graph2Tree Graph2Tree accval 35.0 35,3 36,7 **34,7** 35,7 35 36,8 36,6 **33,2** **33**
equacc 27.9 20,9 21,9 **20,8** 22,1 21,8 21,9 21,5 23,4 **20,6**
Pre-trained RobertaGen accval 30.3 24 24,4 **23,3** 24,9 25,1 25 24 26,3 24,4
**Table 5 Experiments on SVAMP dataset**
Question Reversing Synonym
Repl. Question Repl.
Rephrase
with
In-Context
Learning
Combined Combined Combined Combined
V1 V2 V3 V4
equ
9/9 3/9 8/9 8/9 9/9 9/9 7/9 7/9
acc
SVAMP val
8/9 3/9 9/9 8/9 9/9 9/9 8/9 7/9
acc
equ
9/9 7/9 7/9 9/9 9/9 9/9 9/9 9/9
acc
MAWPS-Single val
8/9 7/9 8/9 8/9 8/9 9/9 8/9 8/9
acc
**Table 6 Performance Comparison of the Models with Various Augmentation Techniques**
- The models generally performed well with Rule Based: Question Replacement,
Synonym Replacement, and augmentation with in-context learning approaches,
achieving 7/9, 8/9, and 9/9 in both equation and value accuracy.
- Training with Combined V2, the models consistently demonstrated strong performance with 9/9 in both equation and value accuracy.
- Rule-based: Reversing Question method had a mixed impact on model performance.
Of the 9 models examined, 3 displayed superior performance, outperforming the
reproduced performances. This observation underscores the importance of understanding the impact of augmentation techniques, on model outcomes for the SVAMP
dataset.
In summary, experimental results highlight the significance of dataset composition
and augmentation strategies on various model performances.
13
-----
### 5 Discussion
Our proposed methods address several challenges inherent in MWPs augmentation.
Synonym replacement ensures that the meaning and context of the original problem statements are maintained by introducing variation while preserving the semantic
meaning of the original problems. Similarly, rule-based question replacement and
reversing methodologies are applied strategically to ensure the generation of diverse
and contextually relevant problem statements, thus maintaining the semantic integrity
of the dataset. The in-context learning based approach ensures that generated problem statements are contextually relevant, preserving semantic integrity. Moreover, our
proposed methods introduce diversity into the dataset, thereby reducing the risk of
model bias. By incorporating diverse synonyms and applying predefined rules, we
expose models to a broader range of examples, reducing the risk of bias in the training data, especially on concatenated sets. Similarly, by leveraging the capabilities of
the Llama-7b language model in in-context learning based approach, the risk of model
bias is reduced.
The methodologies of Synonym Replacement, Rule-Based Question Replacement,
and Rule-Based Reversing Question are employed in a controlled manner to maintain
uniformity in the textual information while preserving mathematical relationships.
These methods ensure that the semantic integrity of the dataset is preserved by introducing variation while maintaining coherence in the problem statements. Additionally,
these techniques apply no numerical modifications, ensuring that the numerical
information remains consistent across the dataset. In in-context learning approach,
numerical modifications are applied, but so in a controlled manner. This ensures that
changes to numerical information are uniform and coherent, preserving the underlying
mathematical relationships.
In in-context learning based approach, we utilize prompt-based generation using
an LLM. These models are trained on extensive datasets, which helps ensure their
semantic integrity and robustness. While we acknowledge the importance of consistency in the rephrased statements with the equations and answers, we rely on the
inherent capabilities of the LLM to maintain semantic coherence during the generation
process. Additionally, the problem texts provided to the models are simple, enhancing the likelihood of consistent and accurate rephrased statements aligning with the
equations and answers.
To ensure that our proposed augmentation approaches are not solely relying on
shallow heuristics or memorizing augmented sample templates to achieve improved
accuracy,
- The proposed methods are evaluated on multiple datasets to assess their ability to
generalize across different problem domains and variations.
- The experiments are conducted on 9 baseline models, which serve as benchmarks
for comparison. This allows us to assess the relative performance of our approaches
and verify that any improvements in accuracy are not simply due to overfitting or
memorization.
14
-----
- The diversity of our training data is enhanced by concatenating examples generated
by different proposed methods. Increasing the variety of training examples encourages the models to learn robust and generalizable patterns, rather than relying on
memorized templates or shallow heuristics.
We have also thoroughly assessed the efficacy of our proposed in-context learning
approach using the MWPBert model. The examples provided in Table 7 demonstrate a set of MWPs samples that were initially incorrectly solved by the MWPBert
model prior to its training with augmented samples. However, after training with the
augmented samples generated through the in-context learning approach, the model
successfully generates correct answers. This improvement provides the claim that our
augmentation approach enhances the model’s reasoning ability when solving MWPs.
**Table 7 The MWP samples were incorrectly solved by MWPBert, correctly solved after**
training with in-context learning based approach
|Question|Equation|Answer|
|---|---|---|
|There are 20 different books in the ’crazy silly school’ series. If you are yet to read 5 of the books, how many books have you already read?|X=(20.0 - 5.0)|15|
|Paul got a box of 440 crayons for his birthday. During the school year he gave 111 crayons to his friends while he lost 106 crayons. How many crayons did he have left?|X=(440.0 - (111.0 + 106.0))|223|
|Bobby had 21 pieces of candy. He ate 5 pieces of candy. Then he ate 9 more. How many pieces of candy does he still have left?|X=(21.0 - (5.0 + 9.0))|7|
|Dan has $4. He bought a candy bar for $7 and a chocolate for $6. How much money did he spend buying the candy bar and chocolate?|X=(7.0 + 6.0)|13|
### 6 Conclusion and Future Work
In conclusion, addressing the challenges posed by MWP solving in NLP is critical for
improving the performance of existing models. The need for improved generalization
highlights the significance of this study’s objective to enhance MWP datasets with
high-quality data. While this study’s primary focus is on English, the study points to
the potential generalization of the presented ideas to other languages in MWP solving.
Several data augmentation techniques have been introduced and tested on MAWPSSingle and SVAMP datasets. The experiment results demonstrated the progress in
performance on both datasets.
In the future, we aim to investigate adversarial MWPs as part of our ongoing
efforts, to improve the robustness and performance of our approach. We will also
explore the effectiveness of our in-context learning based augmentation approach with
various other LLMs such as GPT-3 (ada, cabbage, curie, davinci), GPT-3.5 etc., as
part of our future work.
15
-----
**Acknowledgments.** This research is supported by The Scientific and Technological
Research Council of Turkey (TUB[¨] ITAK) in part of the project with 120E100 grant[˙]
number. G.Yigit is supported by TUBITAK - B[˙] IDEB 2211/A national fellowship[˙]
program for Ph.D. studies.
**Declarations** The authors declare that they have no conflict of interest.
**Data Availability.** Data sharing not applicable to this article as no datasets were
generated or analyzed during the current study.
### Appendix A
References
[1] Yigit, G., Amasyali, M.F.: Enhancing multiple-choice question answering through
sequential fine-tuning and curriculum learning strategies. Knowledge and Information Systems, 1–18 (2023)
[2] Wu, L., Wu, P., Zhang, X.: A seq2seq-based approach to question answering over
knowledge bases. In: Semantic Technology: 9th Joint International Conference,
JIST 2019, Hangzhou, China, November 25–27, 2019, Revised Selected Papers 9,
pp. 170–181 (2020). Springer
[3] Fan, A., Jernite, Y., Perez, E., Grangier, D., Weston, J., Auli, M.: Eli5: Long
form question answering. arXiv preprint arXiv:1907.09190 (2019)
[4] Jin, S., Lian, X., Jung, H., Park, J., Suh, J.: Building a deep learning-based qa
system from a cqa dataset. Decision Support Systems, 114038 (2023)
[5] Abdel-Nabi, H., Awajan, A., Ali, M.Z.: Deep learning-based question answering:
a survey. Knowledge and Information Systems 65(4), 1399–1485 (2023)
[6] Rogers, A., Gardner, M., Augenstein, I.: Qa dataset explosion: A taxonomy of nlp
resources for question answering and reading comprehension. ACM Computing
Surveys 55(10), 1–45 (2023)
[7] Yigit, G., Amasyali, M.F.: Ask me: A question answering system via dynamic
memory networks. In: 2019 Innovations in Intelligent Systems and Applications
Conference (ASYU), pp. 1–5 (2019). IEEE
[8] Xie, Z., Sun, S.: A goal-driven tree-structured neural model for math word
problems. In: IJCAI, pp. 5299–5305 (2019)
[9] Zhang, J., Lee, R.K.-W., Lim, E.-P., Qin, W., Wang, L., Shao, J., Sun, Q.:
Teacher-student networks with multiple decoders for solving math word problem.
(2020). IJCAI
16
-----
[10] Liang, Z., Zhang, J., Wang, L., Qin, W., Lan, Y., Shao, J., Zhang, X.: Mwp-bert:
Numeracy-augmented pre-training for math word problem solving. In: Findings
of the Association for Computational Linguistics: NAACL 2022, pp. 997–1009
(2022)
[11] Wang, L., Zhang, D., Zhang, J., Xu, X., Gao, L., Dai, B.T., Shen, H.T.: Templatebased math word problem solvers with recursive neural networks. In: Proceedings
of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 7144–7151 (2019)
[12] Zhang, J., Wang, L., Lee, R.K.-W., Bin, Y., Wang, Y., Shao, J., Lim, E.-P.:
Graph-to-tree learning for solving math word problems. (2020). Association for
Computational Linguistics
[13] Shen, Y., Jin, C.: Solving math word problems with multi-encoders and multidecoders. In: Proceedings of the 28th International Conference on Computational
Linguistics, pp. 2924–2934 (2020)
[14] Upadhyay, S., Chang, M.-W.: Annotating derivations: A new evaluation strategy
and dataset for algebra word problems. arXiv preprint arXiv:1609.07197 (2016)
[15] Qin, J., Lin, L., Liang, X., Zhang, R., Lin, L.: Semantically-aligned universal
tree-structured solver for math word problems. arXiv preprint arXiv:2010.06823
(2020)
[16] Kushman, N., Artzi, Y., Zettlemoyer, L., Barzilay, R.: Learning to automatically
solve algebra word problems. In: Proceedings of the 52nd Annual Meeting of the
Association for Computational Linguistics (Volume 1: Long Papers), pp. 271–281
(2014)
[17] Koncel-Kedziorski, R., Roy, S., Amini, A., Kushman, N., Hajishirzi, H.: Mawps:
A math word problem repository. In: Proceedings of the 2016 Conference of
the North American Chapter of the Association for Computational Linguistics:
Human Language Technologies, pp. 1152–1157 (2016)
[18] Patel, A., Bhattamishra, S., Goyal, N.: Are nlp models really able to solve simple
math word problems? arXiv preprint arXiv:2103.07191 (2021)
[19] Miao, S.-Y., Liang, C.-C., Su, K.-Y.: A diverse corpus for evaluating and
developing english math word problem solvers. arXiv preprint arXiv:2106.15772
(2021)
[20] Wang, Y., Liu, X., Shi, S.: Deep neural solver for math word problems. In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language
Processing, pp. 845–854 (2017)
[21] Roy, S., Roth, D.: Mapping to declarative knowledge for word problem solving.
Transactions of the Association for Computational Linguistics 6, 159–172 (2018)
17
-----
[22] Mitra, A., Baral, C.: Learning to use formulas to solve simple arithmetic problems.
In: Proceedings of the 54th Annual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pp. 2144–2153 (2016)
[23] Fletcher, C.R.: Understanding and solving arithmetic word problems: A computer
simulation. Behavior Research Methods, Instruments, & Computers 17(5), 565–
571 (1985)
[24] Bakman, Y.: Robust understanding of word problems with extraneous information. arXiv preprint math/0701393 (2007)
[25] Yuhui, M., Ying, Z., Guangzuo, C., Yun, R., Ronghuai, H.: Frame-based calculus
of solving arithmetic multi-step addition and subtraction word problems. In: 2010
Second International Workshop on Education Technology and Computer Science,
vol. 2, pp. 476–479 (2010). IEEE
[26] Wang, L., Wang, Y., Cai, D., Zhang, D., Liu, X.: Translating a math word problem
to an expression tree. arXiv preprint arXiv:1811.05632 (2018)
[27] Yigit, G., Amasyali, M.F.: Exploring the benefits of data augmentation in
math word problem solving. In: 2023 International Conference on Innovations in
Intelligent Systems and Applications (INISTA), pp. 1–6 (2023). IEEE
[28] Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T.,
Rozi`ere, B., Goyal, N., Hambro, E., Azhar, F., et al.: Llama: Open and efficient
foundation language models. arXiv preprint arXiv:2302.13971 (2023)
[29] Lan, Y., Wang, L., Zhang, Q., Lan, Y., Dai, B.T., Wang, Y., Zhang, D., Lim,
E.-P.: Mwptoolkit: an open-source framework for deep learning-based math word
problem solvers. In: Proceedings of the AAAI Conference on Artificial Intelligence,
vol. 36, pp. 13188–13190 (2022)
[30] Hosseini, M.J., Hajishirzi, H., Etzioni, O., Kushman, N.: Learning to solve
arithmetic word problems with verb categorization. In: EMNLP, pp. 523–533
(2014)
[31] Zhou, L., Dai, S., Chen, L.: Learn to solve algebra word problems using quadratic
programming. In: Proceedings of the 2015 Conference on Empirical Methods in
Natural Language Processing, pp. 817–822 (2015)
[32] Koncel-Kedziorski, R., Hajishirzi, H., Sabharwal, A., Etzioni, O., Ang, S.D.: Parsing algebraic word problems into equations. Transactions of the Association for
Computational Linguistics 3, 585–597 (2015)
[33] Huang, D., Shi, S., Lin, C.-Y., Yin, J.: Learning fine-grained expressions to
solve math word problems. In: Proceedings of the 2017 Conference on Empirical
Methods in Natural Language Processing, pp. 805–814 (2017)
18
-----
[34] Zhang, B., Xiong, D., Su, J., Duan, H., Zhang, M.: Variational neural machine
translation. arXiv preprint arXiv:1605.07869 (2016)
[35] Huang, D., Liu, J., Lin, C.-Y., Yin, J.: Neural math word problem solver with
reinforcement learning. In: Proceedings of the 27th International Conference on
Computational Linguistics, pp. 213–223 (2018)
[36] Chiang, T.-R., Chen, Y.-N.: Semantically-aligned equation generation for solving
and reasoning math word problems. arXiv preprint arXiv:1811.00720 (2018)
[37] Li, J., Wang, L., Zhang, J., Wang, Y., Dai, B.T., Zhang, D.: Modeling intrarelation in math word problems with different functional multi-head attentions.
In: Proceedings of the 57th Annual Meeting of the Association for Computational
Linguistics, pp. 6162–6167 (2019)
[38] Meng, Y., Rumshisky, A.: Solving math word problems with double-decoder
transformer. arXiv preprint arXiv:1908.10924 (2019)
[39] Li, S., Wu, L., Feng, S., Xu, F., Xu, F., Zhong, S.: Graph-to-tree neural networks
for learning structured input-output translation with applications to semantic
parsing and math word problem. arXiv preprint arXiv:2004.13781 (2020)
[40] Devlin, J., Chang, M.-W., Lee, K., Toutanova, K.: Bert: Pre-training of
deep bidirectional transformers for language understanding. arXiv preprint
arXiv:1810.04805 (2018)
[41] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M.,
Zettlemoyer, L., Stoyanov, V.: Roberta: A robustly optimized bert pretraining
approach (2019)
[42] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I.: Language
models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019)
[43] Shao, Z., Huang, F., Huang, M.: Chaining simultaneous thoughts for numerical
reasoning. arXiv preprint arXiv:2211.16482 (2022)
[44] Li, Y., Lin, Z., Zhang, S., Fu, Q., Chen, B., Lou, J.-G., Chen, W.: On the advance
of making language models better reasoners. arXiv preprint arXiv:2206.02336
(2022)
[45] Wang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., Chowdhery,
A., Zhou, D.: Self-consistency improves chain of thought reasoning in language
models. arXiv preprint arXiv:2203.11171 (2022)
[46] Pi, X., Liu, Q., Chen, B., Ziyadi, M., Lin, Z., Fu, Q., Gao, Y., Lou, J.-G., Chen,
W.: Reasoning like program executors. arXiv preprint arXiv:2201.11473 (2022)
[47] Chen, W., Ma, X., Wang, X., Cohen, W.W.: Program of thoughts prompting:
19
-----
Disentangling computation from reasoning for numerical reasoning tasks. arXiv
preprint arXiv:2211.12588 (2022)
[48] Liang, Z., Yu, W., Rajpurohit, T., Clark, P., Zhang, X., Kaylan, A.: Let gpt
be a math tutor: Teaching math word problem solvers with customized exercise
generation. arXiv preprint arXiv:2305.14386 (2023)
[49] Lazaridou, A., Gribovskaya, E., Stokowiec, W., Grigorev, N.: Internet-augmented
language models through few-shot prompting for open-domain question answering. arXiv preprint arXiv:2203.05115 (2022)
[50] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V., Zhou,
D., et al.: Chain-of-thought prompting elicits reasoning in large language models.
Advances in Neural Information Processing Systems 35, 24824–24837 (2022)
[51] Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al.: Language models are
few-shot learners. arXiv preprint arXiv:2005.14165 (2020)
[52] Liu, Q., Guan, W., Li, S., Cheng, F., Kawahara, D., Kurohashi, S.: Roda: reverse
operation based data augmentation for solving math word problems. IEEE/ACM
Transactions on Audio, Speech, and Language Processing 30, 1–11 (2021)
[53] Raiyan, S.R., Faiyaz, M.N., Kabir, S.M.J., Kabir, M., Mahmud, H., Hasan,
M.K.: Math word problem solving by generating linguistic variants of problem
statements. arXiv preprint arXiv:2306.13899 (2023)
[54] Liang, Z., Zhang, J., Wang, L., Qin, W., Lan, Y., Shao, J., Zhang, X.: Mwpbert: Numeracy-augmented pre-training for math word problem solving. arXiv
preprint arXiv:2107.13435 (2021)
20
-----
**Table A1 Prompts employed to generate rephrased versions of problem texts within the**
MAWPs-Single dataset
Your task is to rephrase the given texts while preserving the numerical values and
relationships inherent in the original statements.
**Text: A store had 27 coloring books in stock. They ended up putting them on sale**
and getting rid of 6 of them. The put the ones they still had onto shelves with 7 on
each shelf. How many shelves did they use?
**Rephrased: In stock, there were 27 coloring books at a local store. During a sale,**
6 were sold, and the remaining ones were neatly arranged on shelves, with 7 books
on each shelf. How many shelves were utilized for this arrangement?
**Text: Shawn has 13 blocks. Mildred has with 2 blocks. Mildred finds another 84.**
How many blocks does Mildred end with?
**Rephrased: Shawn possesses 13 blocks, while Mildred starts with 2. If Mildred**
discovers an additional 84 blocks, how many blocks does Mildred have in total?
**Text: Melanie grew 139 turnips. Benny grew 113 turnips. How many turnips did**
they grow in all ?
**Rephrased: Melanie cultivated 139 turnips, and Benny grew 113. How many turnips**
did they grow in total?
**Text: A teacher had 29 worksheets to grade. If she graded 25, but then another 29**
were turned in, how many worksheets would she have to grade?
**Rephrased: A teacher had 29 worksheets to grade. After grading 25, an additional**
29 were turned in. How many more worksheets does the teacher need to grade?
**Text: A pet store has 6 bird cages. If each cage has 2 parrots and 7 parakeets in it,**
how many birds does the pet store have total?
**Rephrased: In a pet store, there are 6 bird cages. If each cage contains 2 parrots and**
7 parakeets, how many birds are there in total?
**Text: A restaurant served 6 cakes during lunch and 9 during dinner today . How**
many cakes were served today ?
**Rephrased: Today, a restaurant served 6 cakes during lunch and 9 during dinner.**
How many cakes were served in total?
**Text: Jessica spent 10.22onacattoy, andacagecosther 11.73 . What was**
the total cost of Jessica ’s purchases ?
**Rephrased: Jessica spent 10.22onacattoy, andacagecosther11.73. What was**
the total cost of Jessica’s purchases?
**Text: Sam had to wash 40 short sleeve shirts and 23 long sleeve shirts before school.**
If he had only washed 29 of them by the time school started, how many did he not wash?
**Rephrased: Sam had 40 short sleeve shirts and 23 long sleeve shirts to wash before school.**
If he had only washed 29 of them by the time school started, how many shirts did he not
wash?
**Text: George had 30 dollars. For his birthday he got 16 more dollars but spent 38 on a**
new game. How much money does he have now?
**Rephrased: For his birthday, George received 16morebutspent38 on a new game.**
How much money does he have now?
**Text: A pet store had 41 siamese cats and 28 house cats. During a sale they sold 15**
cats. How many cats do they have left?
**Rephrased: In a pet store, there were 41 Siamese cats and 28 house cats. During a sale,**
15 cats were sold. How many cats are left in the store?
**Text: Greg and Sharon own neighboring cornfields. Greg harvested 0.4 of an acre of**
corn on Monday and Sharon harvested 0.1 of an acre. How many more acres did Greg
harvest than Sharon?
**Rephrased: Greg and Sharon, owners of neighboring cornfields, embarked on their**
harvest. Greg gathered 0.4 acres of corn on Monday, while Sharon reaped 0.1 acres.
What is the difference in the number of acres Greg harvested compared to Sharon?
**Text: A painter needed to paint 6 rooms in a building. Each room takes 5 hours to**
paint. If he already painted 2 rooms, how much longer will he take to paint the rest?
**Rephrased: Tasked with painting a building’s 6 rooms, a diligent painter spends 5**
hours on each. Having completed 2 rooms, how much additional time will it take for
the painter to finish the remaining ones?
**Text: The Spurs basketball team has 22 players. Each player has 11 basketballs. How**
many basketballs do they have in all? 21
**Rephrased: The Spurs basketball team boasts 22 players, each equipped with 11**
basketballs. What is the total number of basketballs in their possession?
**Text: Haley has 63 magazines in her cabinet. She plans to send it to the recycling**
office in their area. If she places it in boxes which can contain 9 magazines, how
many boxes will Hayley use?
**Rephrased: Haley intends to send 63 magazines from her cabinet to the recycling**
office. If she organizes them into boxes, each capable of holding 9 magazines, how
many boxes will she need?
**Text: Frank worked 8 hours on the first 4 days of the week. How many hours did**
he work in all?
**Rephrased: Throughout the initial 4 days of the week, Frank devoted 8 hours each**
day to work. What is the total number of hours he worked during this period?
**Text: A restaurant served 6 cakes during lunch and 9 during dinner today . How many**
cakes were served today ?
**Rephrased:**
-----
**Table A2 Prompts employed to generate rephrased versions of problem texts within the**
SVAMP dataset
Your task is to rephrase the given texts while preserving the numerical values and
relationships inherent in the original statements.
**Text: While playing a trivia game , Mike answered 3.0 questions correct in the first**
half and 5.0 questions correct in the second half . If each question was worth 3.0
points,what was his final score ?
**Rephrased: Engaged in a trivia game, Mike accurately responded to 3.0 questions**
in the initial half and 5.0 questions in the latter half. If each question carried a value
of 3.0 points, what total score did Mike achieve?
**Text: Sam invited 9.0 friends to a birthday party , but 6.0 could n’t come . If he**
wanted to buy enough cupcakes so each person could have exactly 2.0 , how many
should he buy ?
**Rephrased: Inviting 9.0 friends to a birthday celebration, Sam faced the absence**
of 6.0 attendees. To ensure each person could enjoy exactly 2.0 cupcakes, how
many should Sam purchase?
**Text: Keith grew 29.0 cantelopes , Fred grew 16.0 cantelopes , and Jason grew**
20.0 cantelopes. How many cantelopes did they grow in total ?
**Rephrased: Keith, Fred, and Jason cultivated 29.0, 16.0, and 20.0 cantaloupes,**
respectively. What is the combined count of cantaloupes grown by the three?
**Text: Ezra drew a white line that was 7 inches long . Then he drew**
a blue line that was 3 inches long . How much longer was the white
line than the blue line ?
**Rephrased: Creating drawings, Ezra crafted a white line measuring 7 inches and**
a blue line of 3 inches. What is the difference in length between the white and
blue lines?
**Text: I have a pet golden retriever . Each year he gains 11.0 pounds . He is 8.0**
years old . How many pounds does he weigh ?
**Rephrased: Caring for a golden retriever, he accumulates 11.0 pounds annually.**
At the age of 8.0 years, what is the total weight of the golden retriever?
**Text: finally they had to roam around 169 factories to make sure they are**
throwing their wastes properly . if their group went to 69 factories and the second
went to 52 how many factories remain unchecked ?
**Rephrased: To ensure proper waste disposal, the team needed to inspect 169**
factories. If one group covered 69 factories and another visited 52, how many
factories are yet to be checked?
**Text: There are 96.0 oranges in a box . Jonathan takes 45.0 oranges . How many**
are left ?
**Rephrased: Within a box, there exist 96.0 oranges. Jonathan claims 45.0 of them.**
What is the remaining count?
**Text: james has 1222 balloons . amy has 513 balloons . how many more balloons**
does james have than amy ?
**Rephrased: Possessing 1222 balloons, James exceeds Amy’s collection by how**
many balloons, given that Amy has 513?
**Text: A dealer pays 6000.0 dollars for a car . The dealer wants to make a profit**
that is 25.0 % of the selling price . For how much should the dealer sell the car ?
**Rephrased: If a dealer invests 6000.0 dollars in acquiring a car and aims for a**
profit constituting 25.0% of the selling price, what should be the selling price?
**Text: In fourth grade there were 11.0 students at the start of the year . During the**
year 6.0 students left and 42.0 new students came to school . How many students were
in fourth grade at the end ?
**Rephrased: Commencing fourth grade with 11.0 students, the class experienced**
the departure of 6.0 students and the arrival of 42.0 new students. What is the final
count of students in fourth grade?
**Text: If Joan bicycled 25.0 miles at 5.0 miles per hour , how long was Joan travelling ?**
**Rephrased: Covering a distance of 25.0 miles on a bicycle moving at 5.0 miles per**
hour, what was the duration of Joan’s travel?
**Text: A furniture store has a chair , originally priced at 78.0 dollars , on sale for 46.0**
dollars . What is the percent of decrease , rounded to the nearest tenth ?
**Rephrased: The furniture store offers a chair initially valued at 78.0 dollars for a sale22**
price of 46.0 dollars. What is the percentage decrease, rounded to the nearest tenth?
**Text: mrs. hilt ran 3 miles on monday 2 miles on wednesday and 7 miles on friday.**
how many total miles did she run that week ?
**Rephrased: Covering 3 miles on Monday, 2 miles on Wednesday, and 7 miles on**
Friday, what was the overall distance Mrs. Hilt ran during the week?
**Text: A petri dish originally contained 600.0 bacteria . A scientist let the bacteria**
grow and now there are 8917.0 of them . How many more bacteria are there now ?
**Rephrased: Starting with 600.0 bacteria, the population in a petri dish increased**
to 8917.0 due to growth. What is the additional count of bacteria?
**Text: Ryan has 72.0 marbles and 17.0 blocks . If he shares the marbles**
among 9.0 friends , how many marbles does each friend get ?
**Rephrased: With a possession of 72.0 marbles and 17.0 blocks, if Ryan**
distributes the marbles equally among 9.0 friends, what is the share per friend?
**Text: A restaurant served 6 cakes during lunch and 9 during dinner today .**
How many cakes were served today ?
-----
| [
"Gulsum, Yigit",
"Mehmet Fatih, Amasyali"
] | 2024-04-05T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2404.03938v1 | https://arxiv.org/abs/2404.03938 | https://www.semanticscholar.org/paper/bdc53b2cd5fcfbcde1715cc84a923cc9a63ad524 |
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs | Large Language Models (LLMs) have demonstrated remarkable efficiency in tackling various tasks based on human instructions, but studies reveal that they often struggle with tasks requiring reasoning, such as math or physics. This limitation raises questions about whether LLMs truly comprehend embedded knowledge or merely learn to replicate the token distribution without a true understanding of the content. In this paper, we delve into this problem and aim to enhance the reasoning capabilities of LLMs. First, we investigate if the model has genuine reasoning capabilities by visualizing the text generation process at the attention and representation level. Then, we formulate the reasoning process of LLMs into a causal framework, which provides a formal explanation of the problems observed in the visualization. Finally, building upon this causal framework, we propose Deconfounded Causal Adaptation (DCA), a novel parameter-efficient fine-tuning (PEFT) method to enhance the model's reasoning capabilities by encouraging the model to extract the general problem-solving skills and apply these skills to different questions. Experiments show that our method outperforms the baseline consistently across multiple benchmarks, and with only 1.2M tunable parameters, we achieve better or comparable results to other fine-tuning methods. This demonstrates the effectiveness and efficiency of our method in improving the overall accuracy and reliability of LLMs. | Deconfounded Causal Adaptation (DCA), a novel parameter-efficient fine-tuning (PEFT) method to enhance the model's reasoning capabilities by encouraging the model to extract the general problem-solving skills and apply these skills to different questions. | ## Deconfounded Causality-aware Parameter-Efficient Fine-Tuning for Problem-Solving Improvement of LLMs
Ruoyu Wang[1], Xiaoxuan Li[1], and Lina Yao[1][,][2]
1
University of New South Wales
2
Commonwealth Scientific and Industrial Research Organisation, Australia
**Abstract. Large Language Models (LLMs) have demonstrated remark-**
able efficiency in tackling various tasks based on human instructions,
but recent studies reveal that these models often fail to achieve satisfactory results on questions involving reasoning, such as mathematics or
physics questions. This phenomenon is usually attributed to the uncertainty regarding whether these models could genuinely comprehend the
knowledge embedded in the text or merely learn to replicate the token
distribution without a true understanding of the content. In this paper,
we delve into this problem and aim to enhance the reasoning capabilities
of LLMs. First, we investigate if the model has genuine reasoning capabilities by visualizing the text generation process at the attention and
representation level. Then, we formulate the reasoning process of LLMs
into a causal framework, which provides a formal explanation of the problems we observe in the visualization. Finally, building upon this causal
framework, we propose Deconfounded Causal Adaptation (DCA), a novel
parameter-efficient fine-tuning (PEFT) method to enhance the model’s
reasoning capabilities by encouraging the model to extract the general
problem-solving skills and apply these skills to different questions. Experiments show that our method outperforms the baseline consistently
across multiple benchmarks, and with only 1.2M tunable parameters, we
achieve better or comparable results to other fine-tuning methods. This
demonstrates the effectiveness and efficiency of our method in improving
the overall accuracy and reliability of LLMs.
**Keywords: Parameter-Efficient Fine-Tuning (PEFT) · Causality · Large**
Language Models
**1** **Introduction**
Recent years have witnessed remarkable progress on Large Language Models
(LLMs) [38], especially those instruction-following models such as ChatGPT
and GPT-4 [17]. Numerous studies have demonstrated that these models exhibit
strong capabilities across a wide range of tasks. However, despite the effectiveness
of these models, existing work [11] shows that they perform poorly on Out-ofDistribution tasks, so fine-tuning with specific tasks and datasets is required to
achieve satisfactory results.
-----
2 Wang et al.
Fig. 1: Parameter-Efficient Fine-Tuning (PEFT) methods transform the nonprompt-following model to prompt-following by injecting a small number of
learnable parameters into the pre-trained LLM. Our method lies in the domain
of PEFT and concentrates on its problem-solving capabilities.
Nevertheless, fine-tuning large-scale LLMs in full is often prohibitively costly,
thus many Parameter-Efficient Fine-Tuning (PEFT) methods have been proposed in recent years, which transform a non-prompt-following model into a
prompt-following model by injecting a small number of extra model parameters
(Figure 1), thereby greatly decreasing the computational and storage costs. Recent State-of-the-Art PEFT techniques achieve performance comparable to that
of full fine-tuning [37,16].
While these prompt-following models or fine-tuning methods have been proven
to be effective in generating responses based on human instructions, there remains uncertainty regarding whether these models have genuinely acquired knowledge from the text or merely learned the distribution of the word tokens without
true comprehension. [29] claimed that the scaling up of language models could
significantly enhance their performance, which is usually seen as a piece of evidence that the LLMs can acquire knowledge when it’s sufficiently large. However,
[21] claims that emergent abilities only appear for specific metrics, and [11] suggests that these models do not possess any causal reasoning abilities.
Many discussions have been raised regarding this issue, yet the answer remains inconclusive. Besides, most of these discussions are raised on GPT models, and it is rarely addressed in the context of LLM Fine-tuning. Therefore, we
investigate this issue in the context of LLM Fine-tuning and propose a novel Parameter Efficient Fine-Tuning (PEFT) method based on Causal Inference techniques to improve the reasoning capabilities of the models. In particular, we first
investigate if the model has genuine reasoning capabilities by visualizing the
reasoning process at the attention and representation level. Then, we formulate
the reasoning process of LLMs into a causal framework, which provides a formal
explanation of the problems we observe in the visualization. Finally, we propose
Deconfounded Causal Adaptation (DCA), a novel fine-tuning method to improve
the model’s reasoning capability, and experimentally show the effectiveness and
efficiency of our method. The contribution of our paper is three-fold:
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 3
**– We investigate the text generation process of an instruction-following model**
by visualization in the level of attention and representation, and present
empirical evidence that the model lacks genuine causal reasoning capabilities;
**– We formulate the reasoning process of LLMs in a causal framework, formally**
explaining the reasons for the observed failure cases in the visualization;
**– We propose Deconfounded Causal Adaptation (DCA), a novel fine-tuning**
method to improve the reasoning capability of LLMs, and experimentally
demonstrate the effectiveness of our method, which achieves strong performance with only 1.2 Million tunable parameters.
**2** **Preliminary**
**2.1** **LLAMA-Adapter**
LLaMA-Adapter [37] is a lightweight adaption method to fine-tune LLaMA into
an instruction-following model, which has demonstrated the capability to generate high-quality responses. We conducted our study and built our method based
on LLaMA-Adapter due to its effectiveness and efficiency.
The architecture of LLaMA-Adapter is illustrated in Figure 2a. For each of
the topmost L Transformer layers of LLaMA, an adaption prompt Tl ∈ R[M] _[×][C]_
is concatenated to the original prompt Pl ∈ R[K][×][C] along the token dimension:
[Pl; Tl] ∈ R[(][K][+][M] [)][×][C] (1)
where M denotes the length of the adapter to be concatenated, K denotes the
original prompt length for each transformer layer, and C denotes the feature
dimension of LLaMA’s transformer. This concatenation operation is applied to
the corresponding dimension in Key and Value in the self-attention mechanism.
Further, a zero-init attention mechanism with zero gating is proposed to
improve the training by injecting the new instructional cues into LLaMA. While
calculating the attention score, the softmax function is applied independently to
the two components in Equation 1, and multiplies the concatenated term by a
gating factor gl, as illustrated in Equation 2 and Figure 2a.
_T_
_Sl[g]_ [=] _Softmax(Sl[K][);][ Softmax][(][S]l[M]_ [)][ ·][ g][l] (2)
We highlighted part of the architecture of the LLaMA-Adapter that is closely
related to our method. We direct interested readers to [37] for comprehensive
details of this method.
**2.2** **Causal Inference**
In the domain of Causality [18], causal relationships are usually denoted by
Directed Acyclic Graph (DAG). For example, in Figure 2b, X → Z denotes
that X is a direct cause of Z. There are three basic building blocks in a causal
graph: Chain, Fork, and Collider. Chain is the case where one element causally
-----
Wang et al.
(b)
(c)
(a)
Fig. 2: (a) The architecture of LLaMA-Adapter. A trainable lightweight adapter
is inserted into each of the topmost L layers out of the N transformer layers
of LLaMA. Aided by zero-init attention and gating mechanisms, the adaption
prompt progressively learns new instructional cues, without disturbing the original pre-trained knowledge; (b) X → Z → Y is a chain, X ← C → Y is a fork,
C → Y ← Z is a collider; (c) We perform intervention do(X) to cut the edge
_C →_ _X so that the causal effect P_ (Y |do(X)) can be estimated.
influences another, then leading to the causal impact on a third element, such as
X → Z → Y in Figure 2b. Fork is the case where one element causally influences
two other elements, such as X ← C → Y in Figure 2b. Collider is the case where
two elements causally influence a third element such as C → Y ← Z in Figure 2b.
**Confounder If a variable is the common cause of two other variables, it**
is called a confounder. Confounders will induce spurious correlations between
the two variables, thus disturbing the recognition of the causal effect between
them. For example, in Figure 2b, C is a confounder between X and Y. The
association between X and Y include the spurious correlations created by the
confounder C (X ← _C →_ _Y ) which is non-causal, and the goal of causal inference_
is to deconfound the spurious correlations so that the true causal relationships
between X and Y (X → _Z →_ _Y ) can be measured._
**Intervention In order to measure the causal effect between X and Y, we**
need to avoid the association flow through the fork X ← C → Y by blocking
the path C → X. To this end, we force the variable X = x regardless the value
of C. In that case, C no longer affects the value of X and thus path C → X is
blocked. This process is called intervention in causal inference and is denoted
as do(X=x) (Figure 2c). In contrast to P (Y |X), which comprises both causal
association and spurious correlations caused by confounder, P (Y |do(X)) allows
us to measure the genuine causal effect between X and Y.
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 5
**3** **Our Method**
**3.1** **Investigation and Motivation**
As discussed in Section 1, we aim to investigate if the prompt-following models
have genuine causal reasoning capabilities. To this end, we conduct the following
experiments. Since models such as ChatGPT and GPT-4 are not available in
open-source form, we conduct our study using LLaMA-Adapter [37] to gain
access to attention values and representations at each layer.
First, we fine-tune the LLaMA 7B model with LLaMA-Adapter using the
_Letter Concatenation dataset, which will be introduced in Section 4.2. Then, we_
test the model with two prompts below. The only difference between these two
prompts lies in the string within the quotation marks, and as a result, the model
answered Prompt A correctly, but failed on Prompt B.
**Prompt A: Take the second last letters of the words in “GALLEGOS**
_MORAN” and concatenate them;_
**Prompt B: Take the second last letters of the words in “DAVENPORT**
_MAGANA” and concatenate them._
To explore the cause of the model’s failure on Prompt B, we visualize the
attention values in the text generation process by adapting BertViz [28], and
conduct a thorough comparison between the two test cases on the attention heat
map of each attention head across all transformer layers. Consequently, we found
that the model’s failure on Prompt B can be attributed to the malfunctioning
of some particular adapter structures.
Figure 3a-3b provides an example of such malfunctioning structures, where
we present the attention values of the sixth element in the adapter (adap_6)
located in the 32nd attention head of the last transformer layer of LLaMAAdapter. We observed that when the model correctly predicts the answer (Figure 3a), adap_6 tends to focus on the question rather than the value of the
string. However, in Figure 3b, where the model failed to provide the correct
answer, it exhibits a focus on a portion of the string, such as token “AG” and
“AN” as highlighted. Similar patterns can also be observed in many other cases.
Therefore, we empirically conclude that such malfunctioning units are the root
cause of the mistake the model made on Prompt B.
In other words, simply replacing the string within the quotation marks significantly affects the thinking process of the model. This behaviour starkly contrasts
with how humans solve such questions. From a human perspective, Prompt A
and Prompt B are nearly identical, if we understand how to solve one of these
problems, we inherently possess the method to solve all similar questions. This
is because humans understand the world through causal relationships, enabling
us to recognize and comprehend the underlying rationales. In contrast, LLMs
were constructed based on statistical associations, leading to a deficiency in their
capacity to comprehend the question and to do causal reasoning.
-----
Wang et al.
(a) (b)
(c)
(d)
Fig. 3: (a)-(b) Changing the value of the string affects the functioning of the
attention mechanism as highlighted. (c) Causal graph of the reasoning process;
(d) We block the backdoor path by performing an intervention on XG.
Hence, our empirical finding suggests a deficiency in the model’s comprehension of the task, as mere string value changes influence the attention mechanism’s
behaviour. These observations motivate us to enhance the reasoning abilities of
these models. Therefore, we introduce our method to improve response quality by fostering the model’s capability of causal reasoning. Following this idea,
we first formulate the reasoning process of LLMs into a causal framework in
Section 3.2, and then propose our causal Fine-tuning method in Section 3.3.
**3.2** **Method Specification**
We formulate the reasoning process of LLMs into a causal framework, as illustrated in Figure 3c. In this framework, X denotes the encoded feature of the
prompt, K denotes the relevant knowledge to solve the problem provided by the
LLM, and Y denotes the LLM’s response to the query.
**LLM →** **X When a prompt is presented to the LLM, it encodes the prompt**
into feature X. Therefore, LLM is the direct cause of X.
**LLM →** **K ←** **X Once the prompt is encoded, the LLM offers the relevant**
knowledge K required to solve the problem in X. Therefore, both the LLM and
X are direct causes of K.
**K →** **Y ←** **X The knowledge K encompasses the method on how to solve**
the problem described in X, while X contains the question-specific information,
such as the values involved in the problem. So both X and K are a cause of Y.
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 7
As demonstrated in Section 3.1, the prompt feature X comprises two independent semantics, one encompasses general problem-solving information, and the
other one contains problem-specific information. Taking this into consideration,
we introduce two additional elements to the graph, namely, the general problemsolving information XG, and the problem-specific information XS. Both elements
are derived from X, XG serves as a cause of the problem-solving knowledge K,
and XS acts as a mediator between X and Y.
In this framework, XG and XS should be strictly independent because it’s
common sense that the problem does not affect the problem-solving skill set. For
instance, in the letter concatenation problems, the value of the string within the
quotation marks should be independent of the method we use to locate, fetch
and concatenate the desired characters.
However, based on the causal inference theory introduced in Section 2.2, the
independence between XG and XS is not guaranteed. Although there are no
direct causal relationships between the two elements, X acts as a confounder
between XG and XS and thus creates spurious associations between them. This
explains the phenomenon we observed in Figure 3a-3b, where altering the value
of XS (the string within the quotation marks) affects the reasoning process XG
(the functionality of adap_6).
Therefore, to deconfound the spurious association between XG and XS, we
perform an intervention on XG to block the association from flowing through
the path XG _X_ _XS, as demonstrated in Figure 3d. In that case, changing_
_←_ _→_
_XS will no longer affect the reasoning process of XG._
**3.3** **Implementation of Causal Intervention**
In this section, we introduce our method to implement the intervention on XG,
as illustrated in Figure 3d. First, we assume that the general problem-solving
information XG and the problem-specific information XS can be identified by
comparison across samples in a dataset, i.e., the differences between data samples
are problem-specific, and thus belong to XS, and the general problem-solving
knowledge, denoted as XG, is common across all samples. For instance, in the
example given in Section 3.1, XG contains the method of fetching the desired
characters and performing concatenation, and XS contains the order of the characters to be fetched and from which string are these characters to be selected.
With this assumption, performing the intervention do(XG) is equivalent to
holding XG invariant across all data samples so that it can maintain the general problem-solving information consistently while changing XS. For example,
we aim to hold adap_6 invariant across Figure 3a and Figure 3b, to avoid it
possessing information of XS, such as the token “AG” and “AN” in Figure 3b.
Thus, we introduce a causal constraint into the training process to encourage
_XG to remain invariant across all data samples. Mathematically, we penalize a_
larger value of variance on XG by introducing a regularization term in Equation 4
min _CE + α_ _causal_ (3)
_θ_ _L_ _L_
-----
8 Wang et al.
Fig. 4: The framework of our method. First, we divide the concatenated Adapter
prompt in LLaMA-Adapter into two segments, Adap1 and Adap2. This affects
the dimensions of K and V in the self-attention mechanism, as denoted on the
right-hand side. The process of generating the feature Z remains unchanged, and
we build our causal loss Lcausal by manipulating Adap1.
_Lcausal = El∈L′ [V ar(XG)]_ (4)
where LCE is the Cross-Entropy Loss used to train the token prediction accuracy,
and α is the weight of our causal regularization term. We apply this causal
regularizer on the topmost L[′] transformer layers, so we take expectation over
these layers, where L[′] _≤_ _L is a tunable hyper-parameter._
In order to estimate XG in each of the topmost L[′] layers in Equation 4, we
divide the concatenated adapter Tl into two separate pieces, Tl,1 with the length
_H, and Tl,2 with length M −_ _H. Therefore, we rewrite Equation 1 as:_
[Pl; Tl,1; Tl,2] ∈ R[(][K][+][H][+(][M] _[−][H][))][×][C]_ (5)
Similar to the vanilla LLaMA-Adapter, this affects the dimension setting of
the Key and Value in the self-attention module. Therefore, we rewrite these two
modules as Equation 6 and Equation 7.
_Kl = [Kvanilla; Kadap1; Kadap2]_ (6)
_Vl = [Vvanilla; Vadap1; Vadap2]_ (7)
Then, instead of applying the softmax function on the three components independently, we first apply the softmax function on the two original components
and multiply with the gating module introduced in the vanilla LLaMA-Adapter,
then separate the score matrices into three pieces. Therefore, we have Equation 8.
_Sl[g]_ [= [][S][vanilla][;][ S][adap][1][;][ S][adap][2][]] (8)
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 9
These operations divide the adapter architecture into two segments. Then
we treat these two segments as XG and XS respectively, enabling us to impose
distinct constraints on each of them. In particular, we treat Tl,1 with length H as
the section controlling the general problem-solving information XG. Therefore,
_XG can be estimated by Equation 9._
_XG_ _Sadap1_ _Vadap1_ (9)
_≈_ _·_
Finally, we aggregate this quantity in each of the topmost L[′] layers and take
expectation to form the causal regularizer as introduced in Equation 4. The
architecture of our method is illustrated in Figure 4. The modules involved in
the calculation of Lcausal are coloured in dark red.
**4** **Experiment**
**4.1** **Experimental Settings**
We build our method by fine-tuning LLaMA 7B model [26], thus all the parameters related to dimensions and layers remain unchanged, such as the number
of transformer layers is 32, and each transformer layer has 32 attention heads.
Also, the feature dimension is 128 for each attention head, thus the total feature
dimension is 4096. We train the model with a maximum sequence length of 256,
and use AdamW for optimization with a learning rate equal to 1e-3. All the
models are fine-tuned for 5 epochs with a batch size of 4 for a fair comparison.
In terms of the parameters introduced by vanilla LLaMA-Adapter, we set
_L = 20 and M = 10, which means we fine-tune the top 20 transformer layers by_
appending an adapter prompt of length 10 on each of them. For the parameters
_H and α introduced by our method, we set H as 2 and α as 1 in all experiments._
The parameter L[′] is data-dependent, and we use 20 for Letter Concatenation,
10 for Date Understanding, 3 for AddSub and Math10k, and 1 for Math401. All
other settings, if not specified here, remain the same as in [37].
**4.2** **Tasks for Evaluation**
We evaluate the performance of our method by three types of reasoning tasks:
**Symbolic Reasoning We construct a more challenging version of the last**
letter concatenation problem in [30] because the models could almost perfectly
solve the problems if the models are fine-tuned with it. Therefore, we ask the
model to perform second last letter concatenation, such as Take the second last
_letters of the words in “Lady Gaga" and concatenate them._
**Commonsense Reasoning We test the models with Date Understanding**
data [23], where each data sample asks a multiple-choice question such as If
_today is Jan 1, 2023, what day is tomorrow in MM/DD/YYYY?_
**Arithmetic Reasoning We test the models on three datasets, Math401**
[34], which comprises basic arithmetic questions such as 1+2=?, AddSub [6]
and Math10k [9], both comprises math word questions such as Tom found 7
_seashells but 4 were broken . How many unbroken seashells did Tom find?._
-----
10 Wang et al.
Table 1: Accuracies of models based on LLaMA-7B. Our method achieves better
or comparable results to other methods with only 1.2M tunable parameters.
|Col1|Params|LConcat Date Math401 AddSub Math10k|Avg.|
|---|---|---|---|
|Alpaca-7B [25] Vicuna-7B [2] Koala-7B [4] Baize-7B [31] LLaMA2-7B ct. [27] Mistral-7B-ins. [10]|- - - - - -|0.0 52.2 9.8 22.3 10.2 0.0 29.4 29.2 38.6 15.3 0.0 54.3 25.6 32.4 12.7 0.0 44.9 28.3 34.4 11.2 0.0 56.8 30.6 56.7 21.6 0.0 54.6 28.6 39.6 13.2|18.9 28.5 25.0 23.8 33.1 27.2|
|---|---|---|---|
|S-Adapterh [7] S-Adapterp [19] P-Adapter [5] LoRA [8] AdaLoRA [36] Prefix-Tune [14] Prompt-Tune [13] KronA [3] LoftQ [15]|134M 68M 200M 4.2M 3.8M 7.0M 2.0M 4.2M 4.0M|80.1 79.8 20.2 78.1 29.9 77.3 79.3 21.5 82.1 24.1 80.4 82.2 22.1 84.7 29.5 80.8 82.6 23.6 83.3 30.8 80.9 83.0 23.4 85.4 30.9 80.2 78.3 25.2 57.0 34.9 78.3 82.6 21.2 62.3 24.7 81.7 82.8 23.4 83.0 31.4 80.9 82.7 23.0 83.7 29.8|57.6 56.9 59.8 60.2 60.7 55.3 53.8 60.5 60.0|
|---|---|---|---|
|LLaMA-Adap. DCA (Ours)|1.2M 1.2M|75.3 78.3 21.6 83.6 30.2 82.1(+6.8) 84.7(+6.4) 24.6(+3.0) 86.3(+2.7) 35.3(+5.1)|57.8 62.6|
|---|---|---|---|
**4.3** **Baselines and Comparison Methods**
We compare our method with other methods from three perspectives to conduct
a comprehensive comparison:
1) We compare our method with the vanilla LLaMA-Adapter [37]. Since
we build our method based on LLaMA-Adapter, this comparison allows us to
understand the direct impact of implementing our method. All common settings
between the two methods such as parameters are kept the same to ensure a fair
comparison. The results of this comparison is presented in the bottom block
of Table 1, and we highlight the margin achieved by our method in green.
2) We compare our method with the other parameter-efficient fine-tuning
(PEFT) methods, as listed in the middle block of Table 1. We apply these
methods on LLaMA 7B, and the results are obtained with the library and
hyper-parameters provided by [16,9]. We present the results and the number
of learnable parameters allowing us to compare our method with the baseline
methods in terms of both effectiveness and efficiency.
3) We compare our method with several pre-trained prompt-following models
with the size of 7B, as listed in the top block of Table 1. These models do
not lie in the domain of PEFT and thus are not directly comparable to our
method. They are either obtained by full fine-tuning or pre-trained with massive
conversational data. We compare our method with these models to investigate
their performances on the reasoning tasks and evaluate if task-specific fine-tuning
is necessary to achieve satisfactory results.
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 11
**4.4** **Overall Results**
The results are presented in Table 1, where the numbers denote the accuracies
the methods achieve on each dataset. While comparing our method with the
three types of baselines outlined above, our findings also fall into three aspects:
1) Compared with LLaMA-Adapter: Our method consistently outperforms LLaMA-Adapter by a considerable margin on all datasets, as highlighted
in green in Table 1. Since all the common settings of the two methods remain
the same, the results directly demonstrate the impact of our causal method.
2) Compared with the other PEFT methods: We found that while
the vanilla LLaMA-Adapter does not always outperform the baseline methods,
our method, in contrast, achieves either the highest or the second highest score
across all datasets. Even though a few methods may perform better than our
method on some particular datasets, it is worth noting that our method has only
1.2M learnable parameters, which is the least among all methods. In summary,
our method achieves better or comparable results with other PEFT methods,
with much less learnable parameters.
3) Compared with pre-trained models: We found that the performance
of pre-trained models is generally not satisfactory compared with the PEFT
methods. While these models achieve fair performances on some datasets, they
face significant challenges in the LConcat task. Notably, it was observed that
none of the pre-trained models under consideration could accurately respond to
the Letter Concatenation questions. To ensure this phenomenon is not due to the
bias in our prompt, we endeavoured to rephrase the questions in LConcat, however, the models consistently exhibited an inability to comprehend the prompts
and frequently provided irrelevant or meaningless responses. We speculate that
this is due to the insufficient inclusion of training data of this specific nature
during the model’s fine-tuning phases.
**Summary Our experiments suggest that fine-tuning on specific tasks is**
necessary to achieve satisfactory results. And, among the Parameter-Efficient
Fine-Tuning methods, our method achieves better or comparable results with
**much less learnable parameters and computational resources.**
**4.5** **Effects of New Parameters**
To further investigate the mechanism of our method, we study the impact of
parameters introduced by our method, namely, the length H of adaption prompts
to be treated as XG, the weight α of the regularization term _causal, and the_
_L_
number of layers L[′] to be used to calculate _causal._
_L_
**Choice of H and α We visualize the effect of H and α on the Letter**
Concatenation dataset in Figure 5a - 5b, where the x-axis denotes the value
of the parameters, and the y-axis denotes the accuracy obtained by the model.
Similar trends can be observed in both charts that increasing the value of H
and α can improve the performance of the model, but excessive values can be
detrimental. This aligns with our intuition. For H, if a substantial fraction of the
adapter remains fixed as XG, then only a limited part of the adapter could be
-----
12 Wang et al.
(a) Effect of H (b) Effect of α
Fig. 5: Effect of H and α on LConcat. The red dot line denotes baseline accuracy.
The value of these parameters should be chosen carefully, otherwise may harm
the performance when the values are too large.
left to address XS, which compromises its efficacy in managing problem-specific
information. For α, if a large weight is employed for _causal, the module to_
_L_
handle XG might remain constant and cannot encode any information.
**Choice of L[′]** We found the optimal choice of L[′] is data-dependent. On
datasets like Letter Concatenation, where all the prompts follow the same format, a larger L[′] is beneficial to the performance. In contrast, on datasets like
AddSub, where the questions are not necessarily in the same template, a smaller
_L[′]_ is preferable. This is intuitively reasonable, because for those datasets where
the prompts are close enough in the first place, encouraging the model to extract
_XG from the bottom layers grants us more control over the reasoning process. In_
contrast, for those datasets where the prompts are not sufficiently close, XG can
only be extracted and controlled when the representations have been aggregated
to a certain level. In that case, a large L[′] would limit the model’s potential for
aggregating the high-level information.
**4.6** **Further Discussions**
**Applicable scenarios We illustrate the motivation and idea of our method in**
Section 3.1. However, it is worth noting that our method is not limited to the case
of the same pattern questions. Instead, prompts in different formats also benefit
from our method. As demonstrated in Section 4, our method benefits a wide
range of reasoning tasks with various datasets. This is because we encourage the
model to extract the “approach” of solving problems. In other words, as long as a
prompt involves reasoning, there will be some problem-solving skills (XG), and
our method is applicable to the scenario. For example, in date understanding
and math word questions, where the prompts vary significantly, our method still
benefits the performance as illustrated in Table 1, because we encourage the
model to extract the high-level knowledge, such as the meaning of “tomorrow”,
“end of the month” or the math operations such as “Add”, “Subtract”, and keep
these problem-solving skills invariance across all data samples. In contrast, our
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 13
method does not apply to the general Q&A questions, such as Tell me about
_Alpaca, because these questions do not require reasoning capabilities and there_
is no “approach” to answer these questions.
**Few-shot experiments Few-shot prompt method such as Chain-of-Thought**
(COT) [30] is known to be useful on large models like ChatGPT/GPT4, but it
does not apply to PEFT methods, so we did not include these experiments in
our paper. To elaborate, COT works well on ChatGPT/GPT4 because those
models are fine-tuned by a massive amount of prompt-answer pairs with oneshot examples, enabling the model to utilize one-shot information effectively. In
contrast, our method fine-tunes a non-prompt-following LLMs (LLaMA) with
task-specific data aiming for improved performance on the task. Since the data
does not contain any one-shot prompts, the model will not be able to utilize the
one-shot information. As a matter of fact, our experiments reveal that COT is
even harmful to the result in such cases.
**Finetuning a prompt-following model We also conduct experiments to**
apply our method on prompt-following models such as Alpaca. As a result,
it achieves an accuracy of 75.3 on LConcat, and 79.8 on Date Understanding
datasets, which is not comparable to the result we achieved using the original
non-prompt following LLaMA. We speculate this is because such instructiontuned LLMs (such as Alpaca/Vicuna) are also based on the original foundation
model such as LLaMA, and it has been fine-tuned with the data that are not
closely related to our downstream tasks, thus dropping some information relevant
to our task, thus harming the performance. Therefore, we empirically conclude
that it would be a better practice to fine-tune the foundation model, rather than
an existing instruction-following model.
**5** **Related Works**
**Reasoning in LLMs. Instruction-following LLMs have been employed on many**
tasks involving reasoning recently, including but not limited to Mathematics,
Logical Reasoning, and Symbolic Reasoning [20,30,38]. Many of these methods
investigate LLM’s reasoning capabilities from its output using Chain-of-Thought
prompting strategy [30]. Apart from these, some works build thinking pipelines
[1,33] to achieve the final goal step-by-step.
**Causal Inference in Machine Learning. Causal inference has been ap-**
plied to many vision tasks in recent years such as image recognition [35,24] and
Image Generation [12]. These works first construct causal graphs to explain the
task, then use causal inference methods to eliminate the spurious association and
improve the performance of the models. Besides, causal inference techniques are
also used in Representation Learning [22,32].
**5.1** **Relationships with our method**
Existing works typically discuss LLMs’ reasoning abilities based on their input
and output [20]. However, we argue that solving causality-related tasks or providing the thinking processes by words do not necessarily indicate the model’s
-----
14 Wang et al.
reasoning capability, because simply mimicking token distribution could achieve
equivalent outcomes. Our work, in contrast, discusses the reasoning capabilities
of LLMs in the level of attention and representation, thus offering a novel perspective on this matter. Besides, the novelty of our method also involves applying
causality in LLM fine-tuning, which was rarely discussed in earlier literature.
**6** **Conclusion**
In this paper, we first investigated the reasoning capabilities of the promptfollowing LLMs by visualizing the attention values in the thinking process, and
empirically suggest that these models lack genuine causal reasoning capabilities. Then, we formulate the reasoning process of LLMs into a causal inference
framework to explain the issues observed in the visualization. Finally, we propose Deconfounded Causal Adaptation (DCA), a causal fine-tuning method to
improve the model’s reasoning capability. Experiments show our method effectively enhances the reasoning capabilities of the models and outperforms baseline methods consistently. Besides, we also discuss the applicable scenarios of our
method and analyze the effect of our method with different settings thoroughly.
**References**
1. Besta, M., Blach, N., Kubicek, A., et al.: Graph of thoughts: Solving elaborate
problems with large language models. arXiv preprint arXiv:2308.09687 (2023)
2. Chiang, W.L., Li, Z., Lin, Z., Sheng, Y., et al.: Vicuna: An open-source chatbot
impressing gpt-4 with 90%* chatgpt quality. https://vicuna. lmsys. org (2023)
3. Edalati, A., Tahaei, M., Kobyzev, I., Nia, V.P., et al.: Krona: Parameter efficient
tuning with kronecker adapter. arXiv preprint arXiv:2212.10650 (2022)
4. Geng, X., Gudibande, A., Liu, H., Wallace, E., Abbeel, P., Levine, S., Song, D.:
Koala: A dialogue model for academic research. Blog post, April 1 (2023)
5. He, J., Zhou, C., Ma, X., Berg-Kirkpatrick, T., Neubig, G.: Towards a unified view
of parameter-efficient transfer learning. arXiv preprint arXiv:2110.04366 (2021)
6. Hosseini, M.J., Hajishirzi, H., Etzioni, O., Kushman, N.: Learning to solve arithmetic word problems with verb categorization. In: EMNLP. pp. 523–533 (2014)
7. Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., et al.: Parameter-efficient
transfer learning for nlp. In: ICML. pp. 2790–2799. PMLR (2019)
8. Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., et al.: Lora: Low-rank
adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021)
9. Hu, Z., Lan, Y., et al.: Llm-adapters: An adapter family for parameter-efficient
fine-tuning of large language models. arXiv preprint arXiv:2304.01933 (2023)
10. Jiang, A.Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D.S., Casas,
D.d.l., Bressand, F., et al.: Mistral 7b. arXiv preprint arXiv:2310.06825 (2023)
11. Jin, Z., Liu, J., Lyu, Z., Poff, Spencer Schölkopf, B., et al.: Can large language
models infer causation from correlation? arXiv preprint arXiv:2306.05836 (2023)
12. Kocaoglu, M., Snyder, C., et al.: Causalgan: Learning causal implicit generative
models with adversarial training. arXiv preprint arXiv:1709.02023 (2017)
13. Lester, B., Al-Rfou, R., Constant, N.: The power of scale for parameter-efficient
prompt tuning. arXiv preprint arXiv:2104.08691 (2021)
-----
Deconfounded Causality-aware Parameter-Efficient Fine-Tuning 15
14. Li, X.L., Liang, P.: Prefix-tuning: Optimizing continuous prompts for generation.
arXiv preprint arXiv:2101.00190 (2021)
15. Li, Y., Yu, Y., Liang, C., He, P., et al.: Loftq: Lora-fine-tuning-aware quantization
for large language models. arXiv preprint arXiv:2310.08659 (2023)
16. Mangrulkar, S., Gugger, S., Debut, L., et al.: Peft: State-of-the-art parameter[efficient fine-tuning methods. https://github.com/huggingface/peft (2022)](https://github.com/huggingface/peft)
17. OpenAI: Gpt-4 technical report (2023)
18. Pearl, J.: Causality. Cambridge university press (2009)
19. Pfeiffer, J., Vulić, I., Gurevych, I., Ruder, S.: Mad-x: An adapter-based framework
for multi-task cross-lingual transfer. arXiv preprint arXiv:2005.00052 (2020)
20. Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., et al.: Reasoning with
language model prompting: A survey. arXiv preprint arXiv:2212.09597 (2022)
21. Schaeffer, R., Miranda, B., Koyejo, S.: Are emergent abilities of large language
models a mirage? arXiv preprint arXiv:2304.15004 (2023)
22. Shen, X., Liu, F., Dong, H., Lian, Q., et al.: Weakly supervised disentangled generative causal representation learning. JMLR 23(1), 10994–11048 (2022)
23. Srivastava, A., Rastogi, A., Rao, A., Shoeb, A.A.M., Abid, A., Fisch, A., Brown,
A.R., Santoro, A., et al.: Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615 (2022)
24. Tang, K., Huang, J., Zhang, H.: Long-tailed classification by keeping the good and
removing the bad momentum causal effect. NeurIPS 33, 1513–1524 (2020)
25. Taori, R., Gulrajani, I., Zhang, T., Dubois, Y., Li, X., Guestrin, C., Liang, P.,
Hashimoto, T.B.: Stanford alpaca: An instruction-following llama model (2023)
26. Touvron, H., Lavril, T., Izacard, G., Martinet, X., et al.: Llama: Open and efficient
foundation language models. arXiv preprint arXiv:2302.13971 (2023)
27. Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al.: Llama 2: Open foundation
and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023)
28. Vig, J.: Bertviz: A tool for visualizing multihead self-attention in the bert model.
In: ICLR workshop: Debugging machine learning models. vol. 23 (2019)
29. Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama,
D., Bosma, M., Zhou, D., Metzler, D., et al.: Emergent abilities of large language
models. arXiv preprint arXiv:2206.07682 (2022)
30. Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V., Zhou,
D., et al.: Chain-of-thought prompting elicits reasoning in large language models.
Advances in Neural Information Processing Systems 35, 24824–24837 (2022)
31. Xu, C., Guo, D., et al.: Baize: An open-source chat model with parameter-efficient
tuning on self-chat data. arXiv preprint arXiv:2304.01196 (2023)
32. Yang, M., Liu, F., Chen, Z., Shen, X., et al.: Causalvae: Disentangled representation
learning via neural structural causal models. In: CVPR. pp. 9593–9602 (2021)
33. Yao, S., Yu, D., Zhao, J., Shafran, I., et al.: Tree of thoughts: Deliberate problem
solving with large language models. arXiv preprint arXiv:2305.10601 (2023)
34. Yuan, Z., Yuan, H., Tan, C., Wang, W., Huang, S.: How well do large language
models perform in arithmetic tasks? (2023)
35. Yue, Z., Zhang, H., Sun, Q., Hua, X.S.: Interventional few-shot learning. Advances
in neural information processing systems 33, 2734–2746 (2020)
36. Zhang, Q., Chen, M., Bukharin, A., He, P., et al.: Adaptive budget allocation for
parameter-efficient fine-tuning. arXiv preprint arXiv:2303.10512 (2023)
37. Zhang, R., Han, J., Zhou, A., Hu, X., et al.: Llama-adapter: Efficient fine-tuning of
language models with zero-init attention. arXiv preprint arXiv:2303.16199 (2023)
-----
16 Wang et al.
38. Zhao, W.X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B.,
et al.: A survey of large language models. arXiv preprint arXiv:2303.18223 (2023)
-----
| [
"Ruoyu, Wang",
"Xiaoxuan, Li",
"Lina, Yao"
] | 2024-09-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2409.02686v1 | https://arxiv.org/abs/2409.02686 | https://www.semanticscholar.org/paper/d4f746b14fdbb50e5fe21c8e33cbd68e06fefd2d |
Deliberate Reasoning for LLMs as Structure-aware Planning with Accurate World Model | Enhancing the reasoning capabilities of large language models (LLMs) remains a key challenge, especially for tasks that require complex, multi-step decision-making. Humans excel at these tasks by leveraging deliberate planning with an internal world model to simulate the potential outcomes of various actions. Inspired by this, we propose a novel multi-step reasoning framework for LLMs, referred to as Structure-aware Planning with Accurate World Model (SWAP). Unlike previous approaches that rely solely on Chain-of-Thought (CoT) reasoning in natural language, SWAP incorporates structural information to guide the reasoning process via a world model and provides a soft verification mechanism over the steps. Moreover, SWAP overcomes the challenge of accurate world state predictions in complex reasoning tasks by introducing a Generator-Discriminator architecture, which enables more reliable world modeling. Specifically, the generator predicts the next state, and the discriminator ensures alignment with the logical consistency required by the problem context. SWAP also encourages the policy model to explore a broad range of potential actions to prevent premature convergence. By resolving the bottlenecks of generation diversity for both actions and states using diversity-based modeling (DBM) and improving discrimination accuracy through contrastive ranking (CR), SWAP significantly enhances the reasoning performance of LLMs. We evaluate SWAP across diverse reasoning-intensive benchmarks including math reasoning, logical reasoning, and coding tasks. Extensive experiments demonstrate that SWAP achieves substantial improvements over the baselines and consistently outperforms existing LLMs of similar sizes. | This work proposes a novel multi-step reasoning framework for LLMs, referred to as Structure-aware Planning with Accurate World Model (SWAP), which incorporates structural information to guide the reasoning process via a world model and provides a soft verification mechanism over the steps. | [
"Yuan, Yang",
"Siheng, Xiong",
"Ali, Payani",
"Faramarz, Fekri"
] | 2024-10-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.03136v1 | https://arxiv.org/abs/2410.03136 | https://www.semanticscholar.org/paper/18f6a10b821bf946b64940e5dd6fa34784efd1e6 |
|
Descending to Complementarity | N/A | null | # Descending to Complementarity[∗]
Martin Suda
Czech Technical University in Prague, Czech Republic
**Motivation**
A proving strategy is a specific configuration of an automatic theorem prover (ATP) used to
attack a problem. It is broadly recognized that there is no universally good strategy and that,
instead, different strategies work well on different problems. At the same time, the task of
predicting in advance, given a particular problem, which strategy will perform best on it, seems
very hard. For these reasons, ATPs typically run—in their most powerful mode of operation—a
whole sequence of time-limited strategies called a schedule, where the individual strategies are
selected to complement each other. This idea of time slicing has been first implemented in the
ATP Gandalf [10, 11] and has been since successfully adopted by most modern ATPs (cf. the
auto-schedule mode in E [6] or the CASC mode in Vampire [3]). A plausible explanation for
why strategy scheduling is so effective in practice, is the observation that if a problem can be
solved at all, it can often be solved by some strategy very quickly [5].
In recent years, we have seen several promising approaches improving the performance of
ATPs with the help of machine learning, in particular artificial neural networks. Most notable
are systems targeting for the improvement clause selection, a crucial heuristic choice point in
a saturation-based ATP. Examples of such systems include ENIGMA-anonymous [1], where
clause selection is guided with the help of a graph neural network [4], and Deepire [7], which
enhances Vampire’s clause selection capabilities by an advice from a recursive neural network
constructed over clause derivation history. While the improvement by these ML-integrations
is very encouraging if not downright impressive, their common trait is that they only strive to
improve a single base prover strategy, thus ignoring the power of strategy scheduling.
The aim of this work is to enhance systems such as ENIGMA or Deepire with the ability
to simultaneously train a full host of guiding heuristics, i.e., strategies, which automatically
adapt to the training problems and become complementary in aspects relevant to solving them.
In the method we propose, the individual strategies arise as local specialists on automatically
clustered problems that are most efficiently solved by the respective variant of the learned
heuristic (such as clause selection). These specialists are determined as small “tweaks” of the
universally optimal, generalist strategy that would be obtained normally.
**Method**
The main idea is to assign a small learnable embedding vector vp to each input problem p and
to condition the network being trained N by vp when learning from examples or experiences
coming from p. The conditioning could mean providing vp as a separate input, but other
schemes seem possible and will be experimented with as part of this work. During training
with gradient descent the embedding vp will “travel” from its initial value (e.g., vp = _[⃗]0 for_
every p) to encode a (mild) specialization—for its corresponding problem p—of the overall
general guiding heuristic. At the same time, the training will thus implicitly cluster the input
problems according to which variation of the general heuristic is most suitable for solving them.
∗Supported by the project RICAIP no. 857306 under the EU-H2020 programme and the Czech Science
Foundation project 20-06390Y.
-----
In more detail, let us imagine we are training a network N from experiences collected
while running our prover over a set of problems P . Our training examples will be of the form
(input, target, p) where p ∈ _P is a problem on which the prover was run to obtain the example._
In standard training with gradient descent, our network would be determined by a vector of
parameters θ and we would, for each example, compute the gradient
_θLoss(Nθ(input), target)_
_∇_
to update these parameters, where Nθ(input) is the network’s prediction for input (which
would—during inference time—be used for the prover guidance).
With conditioning, we additionally maintain a set of problem embeddings _vp_ _p_ _P to be_
_{_ _}_ _∈_
treated as additional trainable network parameters. Our gradient formula becomes
_θ,vp_ _Loss(Nθ,vp_ (input), target)
_∇_
when training on an example obtained while running the prover on p.
The additional degrees of freedom should allow the network to achieve an overall smaller loss.
Thus we expect Nθ,vp (i.e., the specialist) to be more effective at solving problem p (and similar
ones) than the plain Nθ (i.e., the generalist). However, it is important that the dimension of
the embedding _vp_ is small compared to number of parameters _θ_, as we still want the essence
_|_ _|_ _|_ _|_
of the learned guidance to reside in θ and each vp to be only a small “tweak” of this essence.[1]
At the same time, our overall aim is for the space spanned by the embeddings {vp}p∈P
to exactly represent the space of all reasonable strategies representable by our architecture.
The method guarantees that “relevant complementarity” arises for free: the loss decreases
whenever vp specializes the guidance in aspects relevant for solving p better than the generalist
would. There is still a risk, however, that not every point in the spanned space will correspond
to a reasonable heuristic for the ATP (imagine a point lying too far from any final learned
embeddings vp). We will take inspiration from the variational autoencoder models [2], which
face similar difficulties, and adapt their solution of adding an explicit clustering factor into the
loss function, if needed. Then, for deployment, equipped with a whole space of heuristics, we
will be able to sample new strategies on demand, taking advantage of the previously solved
clustered problem to navigate the corresponding space.
**Proof of Concept Implementation**
In the talk, we will report on experiments with the described idea in the context of reinforcement
learning for clauses selection in the ATP Vampire. This is a continuation of our work motivated
in an AITP presentation last year [8]. We train a feed-forward neural network to classify clauses
based on a small set of standard features such as age, weight, the number of variable occurrences,
or a distance to the goal. This setup is deliberately close to the human-designed heuristics (such
as age/weight alternation) aiming to answer the question: can these be re-learned just from the
prover experience? And deliberately simple to be competitive in real time evaluation.
Another distinguishing features of our approach is that we do not attempt to model the
prover’s state for influencing the decisions. This is again in line with the state-of-the-art humandesigned heuristics and allows us to store clauses, as usual, in a queue data structure (once
evaluated by the network) for an efficient “extract-min” retrieval. However, we treat both the
prover and the guiding agent stochastically [9] which brings some extra challenges.
Without aspiring to explain all the relevant details here (thus mainly for the aesthetic
enjoyment of the kind reader) a small teaser of our results is presented in Figure 1.
1At the opposite extreme, we would learn a separate network Np for every problem p, which would most
-----
## References
[1] J. Jakubuv, K. Chvalovsk´y, M. Ols´ak, B. Piotrowski, M. Suda, and J. Urban. ENIGMA
anonymous: Symbol-independent inference guiding machine (system description). In Au_tomated Reasoning - 10th International Joint Conference, IJCAR 2020, Paris, France,_
_July 1-4, 2020, Proceedings, Part II, vol. 12167 of Lecture Notes in Computer Science, pp._
448–463. Springer, 2020.
[2] D. P. Kingma and M. Welling. Auto-encoding variational bayes. In 2nd International
_Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16,_
_2014, Conference Track Proceedings, 2014._
[3] L. Kov´acs and A. Voronkov. First-order theorem proving and vampire. In Computer
_Aided Verification - 25th International Conference, CAV 2013, Saint Petersburg, Russia,_
_July 13-19, 2013. Proceedings, vol. 8044 of Lecture Notes in Computer Science, pp. 1–35._
Springer, 2013.
[4] M. Ols´ak, C. Kaliszyk, and J. Urban. Property invariant embedding for automated reasoning. In ECAI 2020 - 24th European Conference on Artificial Intelligence, 29 August-8
_September 2020, Santiago de Compostela, Spain, August 29 - September 8, 2020 - Includ-_
_ing 10th Conference on Prestigious Applications of Artificial Intelligence (PAIS 2020), vol._
325 of Frontiers in Artificial Intelligence and Applications, pp. 1395–1402. IOS Press, 2020.
[5] G. Reger. Boldly going where no prover has gone before. In Proceedings of the Second
_International Workshop on Automated Reasoning: Challenges, Applications, Directions,_
_Exemplary Achievements, ARCADE@CADE 2019, Natal, Brazil, August 26, 2019, vol._
311 of EPTCS, pp. 37–41, 2019.
[6] S. Schulz, S. Cruanes, and P. Vukmirovi´c. Faster, higher, stronger: E 2.3. In Proc. of the
_27th CADE, Natal, Brasil, number 11716 in LNAI, pp. 495–507. Springer, 2019._
[7] M. Suda. Vampire with a brain is a good ITP hammer. In Frontiers of Combining Systems
_- 13th International Symposium, FroCoS 2021, Birmingham, UK, September 8-10, 2021,_
_Proceedings, vol. 12941 of Lecture Notes in Computer Science, pp. 192–209. Springer, 2021._
[8] M. Suda. Elements of reinforcement learning in saturation-based theorem proving. In 7th
_Conference on Artificial Intelligence and Theorem Proving AITP 2022 – proceedings, 2022._
```
http://aitp-conference.org/2022/abstract/AITP_2022_paper_11.pdf.
```
[9] M. Suda. Vampire getting noisy: Will random bits help conquer chaos? (system description). In Automated Reasoning - 11th International Joint Conference, IJCAR 2022, Haifa,
_Israel, August 8-10, 2022, Proceedings, vol. 13385 of LNCS, pp. 659–667. Springer, 2022._
[10] T. Tammet. Towards efficient subsumption. In Automated Deduction - CADE-15, 15th
_International Conference on Automated Deduction, Lindau, Germany, July 5-10, 1998,_
_Proceedings, vol. 1421 of Lecture Notes in Computer Science, pp. 427–441. Springer, 1998._
[[11] T. Tammet. Towards efficient subsumption – full version: http://www.cs.cmu.edu/~fp/](http://www.cs.cmu.edu/~fp/courses/atp/cmuonly/T98.pdf)
```
courses/atp/cmuonly/T98.pdf, 1998. Accessed on May, 2023.
```
likely not generalize to previously unsolved problems.
-----
Figure 1: A heat map showing guiding model’s loss as a function of a two-dimensional problem
“tweak” embedding v. The loss was computed for a particular trace collected from solving the
TPTP problem GRA002+1. The loss achieves a minimum of around 5.0 in the central red patch.
In the regular grid overlay of the small circles, the actual success rate of Vampire guided with
a particular tweak is shown in grayscale (black means “never solved”, pure white is “solved in
10 out of 10 randomly seeded experimental runs”). The black circle that is not part of the grid
represents the actual current “tweak” vp for the mentioned problem. The plot demonstrates a
rare situation when a loss-minimizing version of the guidance does not lead to a success, while
other versions of the guidance do. This points to an imperfect match between the loss function
and the loss-minimizing network’s influence on the prover performance.
-----
| [
"Martin, Suda"
] | 2023-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Designing Games of Theorem Proving | N/A | null | # AITP 2018
#### The Third Conference on Artificial Intelligence and Theorem Proving
## Abstracts of the talks
#### March 25 – 30, 2018, Aussois, France
-----
**Preface**
This volume contains the abstracts of the talks presented at AITP 2018: The
Third Conference on Artificial Intelligence and Theorem Proving held on March
25–30, 2018 in Aussois, France.
We are organizing AITP because we believe that large-scale semantic processing and strong computer assistance of mathematics and science is our inevitable
future. New combinations of AI and reasoning methods and tools deployed over
large mathematical and scientific corpora will be instrumental to this task. We
hope that the AITP conference will become the forum for discussing how to get
there as soon as possible, and the force driving the progress towards that.
AITP 2018 consists of several sessions discussing connections between modern AI, ATP, ITP and (formal) mathematics. The sessions are discussion oriented
and based on 10 invited talks and 21 contributed talks.
We would like to thank the Aussois CNRS conference center for hosting
AITP 2018. Many thanks also to Andrei Voronkov and his EasyChair for their
support with paper reviewing and proceedings creation. The conference was
partly funded from the European Research Council (ERC) under the EU-H2020
projects SMART (no. 714034) and AI4REASON (no. 649043), and the Czech
project AI&Reasoning CZ.02.1.01/0.0/0.0/15 003/0000466 and the European
Regional Development Fund.
Finally, we are grateful to all the speakers, participants and PC members for
their interest in discussing and pushing forward these exciting topics!
March 25, 2018
Pittsburgh
Innsbruck
Stuttgart
Prague
Thomas C. Hales
Cezary Kaliszyk
Stephan Schulz
Josef Urban
-----
**Table of Contents**
Axiomatizing consciousness, with applications . . . . . . . . . . . . . . . . . . . . . . . . 6
_Henk Barendregt and Antonino Raffone_
Some Reflections on a Computer-aided Theory Exploration Study in
Category Theory (Extended Abstract) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
_Christoph Benzm¨uller and Dana Scott_
Project Proposal: Proof Guidance by Compression . . . . . . . . . . . . . . . . . . . . 9
_Lasse Blaauwbroek_
Hammer for Coq: Automation for Dependent Type Theory . . . . . . . . . . . . . 12
_Lukasz Czajka and Cezary Kaliszyk_
Computational Exploration of String Theory . . . . . . . . . . . . . . . . . . . . . . . . . 13
_Michael Douglas_
Revisiting SAD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
_Steffen Frerix and Peter Koepke_
Safe Reinforcement Learning via Formal Methods . . . . . . . . . . . . . . . . . . . . . 17
_Nathan Fulton and Andr´e Platzer_
Reasoning and Consciousness — Teaching a Theorem Prover to let its
Mind Wander . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
_Ulrich Furbach and Claudia Schon_
TacticToe: Learning to Prove with Tactics . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
_Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar and_
_Michael Norrish_
First Experiments with Watchlist Guidance on Mizar . . . . . . . . . . . . . . . . . . 23
_Zarathustra Goertzel, Jan Jakubuv and Josef Urban_
Guiding SMT Solvers with Monte Carlo Tree Search and Neural Networks 26
_St´ephane Graham-Lengrand and Michael F¨arber_
LET’S MAKE SET THEORY GREAT AGAIN! . . . . . . . . . . . . . . . . . . . . . . 30
_John Harrison_
Automation by Analogy, in Coq . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
_Alasdair Hill and Ekaterina Komendantskaya_
Enhancing ENIGMA Given Clause Guidance . . . . . . . . . . . . . . . . . . . . . . . . . 33
_Jan Jakubuv and Josef Urban_
Towards Machine Learning for Quantification . . . . . . . . . . . . . . . . . . . . . . . . . 36
_Mikolas Janota_
-----
Project Proposal: Reinforcement Learning for leanCoP . . . . . . . . . . . . . . . . 37
_Cezary Kaliszyk, Henryk Michalewski, Piotr Milos, Mirek Olsak and_
_Josef Urban_
Mizar in Isabelle for Formal Abstracts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
_Cezary Kaliszyk and Karol Pak_
Mechanizing Principia Logico-Metaphysica in Functional Type Theory . . . 42
_Daniel Kirchner, Christoph Benzm¨uller and Edward N. Zalta_
Toward AI for Lean, via metaprogramming . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
_Robert Lewis_
Cumulative Effects in Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
_Erik Martin-Dorel and Sergei Soloviev_
Machine comprehension of math problem text . . . . . . . . . . . . . . . . . . . . . . . . 49
_Takuya Matsuzaki and Noriko H. Arai_
If mathematical proof is a game, what are the states and moves? . . . . . . . . 50
_David McAllester_
Towards logics for neural conceptors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
_Till Mossakowski and Razvan Diaconescu_
Designing Games of Theorem Proving . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
_Yutaka Nagashima_
Who cares about Euclidean geometry? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
_Mirek Olˇs´ak_
ATP-guidance for Learning Premise Selection _. . . . . . . . . . . . . . . . . . . . . . . . ._ 57
_Bartosz Piotrowski and Josef Urban_
Dynamic Strategy Priority: Empower the strong and abandon the
weak. (Extended Abstract) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
_Michael Rawson and Giles Reger_
Implementation of Lambda-Free Higher-Order Superposition . . . . . . . . . . . 62
_Petar Vukmirovic_
Disambiguating ProofWiki into Mizar: First Steps _. . . . . . . . . . . . . . . . . . . . ._ 63
_Jiri Vyskocil and Josef Urban_
Building an Auto-Formalization Infrastructure through Text-Mining on
Mathematical Literature: Project Description . . . . . . . . . . . . . . . . . . . . . . . . . 66
_Qingxiang Wang_
-----
**Program Committee**
David Aspinall The University of Edinburgh
Jasmin C. Blanchette Vrije Universiteit Amsterdam
Joseph Corneli Department of Computing
Cameron Freer Remine
Ulrich Furbach University of Koblenz
Thomas Hales University of Pittsburgh
Sean Holden University of Cambridge
Geoffrey Irving Google
Moa Johansson Chalmers University of Technology
Cezary Kaliszyk University of Innsbruck
Michael Kohlhase Computer Science, FAU Erlangen-N¨urnberg
Ramana Kumar Datat61, CSIRO; and UNSW
Jens Otten University of Oslo
Stephan Schulz DHBW Stuttgart
Dawn Song University of California, Berkeley
Martin Suda Vienna University of Technology
Geoff Sutcliffe University of Miami
Charles Sutton The University of Edinburgh
Christian Szegedy Google
Josef Urban Czech Technical University in Prague
**Additional Reviewers**
Betzendahl, Jonas
Brown, Chad
F¨arber, Michael
M¨uller, Dennis
Sch¨afer, Frederick
-----
#### Axiomatizing consciousness, with applications
Henk Barendregt
Radboud University Nijmegen
Abstract. Consciousness is dened as a stream of congurations that
consist of three components: object, state, and action. The congurations
change in discrete moments in time, while their three components inuence each other recurrently, depending on the situation in the world.
Mindfulness enables modifying the stream of congurations by taking
states as objects of consciousness, on which an inuence can be exerted.
Several mental mechanisms can be understood by the axiomatic theory.
1. Memory, learning and deconditioning. 2. Mental suering, including
clinical phenomena. 3. The Church-Turing thesis on human computability. Applications of the understanding of mental suering can be found
in clinical practice and meditation. The validity of these practices are
increasingly investigated in cognitive (neuro)psychology.
-----
### Some Reflections on a Computer-aided Theory Exploration Study in Category Theory
Christoph Benzm¨uller[1] and Dana S. Scott[2]
1 University of Luxembourg, Luxembourg & Freie Universit¨at Berlin, Berlin, Germany
```
[email protected]
```
2 Visiting Scholar at University of California, Berkeley, USA
```
[email protected]
```
We present some reflections on the use of automated theorem proving and model finding
technology in the context of a recent theory exploration study in category theory [1, 2].
In our stepwise development of mutually equivalent axioms sets for category theory we
started out with a generalised notion of monoids. More precisely, the first axiom system in our
study was obtained by generalizing the standard axioms for a monoid to a partial composition
operation. In subsequent development steps we simplified this initial axioms set until we reached
the axioms as proposed by Scott [15] in the 1970s. We then compared this axioms set with an
alternative proposal by Freyd and Scedrov [11]. In the course of this comparison we revealed
a technical flaw for the axiom set of Freyd and Scedrov: either all operations, e.g. morphism
composition, are total or their axiom system is inconsistent. The repair for this problem is
quite straightforward and it essentially corresponds to the set of axioms proposed by Scott.
Our experiments were enabled by a semantical embedding of free logic [14] in classical higherorder logic (HOL), which we implemented in the proof assistant system Isabelle/HOL [12].
Free logic was utilised to support an adequate handling of partiality and undefinedness in the
modeling of morphism composition, and the domain and codomain operators. Our experiments
were substantially supported by automated reasoning technology, in particular, by the model
finder Nitpick [7] and by various automated theorem provers (CVC4 [10], E [13], Leo-II [3],
Satallax [8], SPASS [6], Z3 [9], etc.) integrated with Isabelle/HOL via Sledgehammer [5].
In our presentation at AITP 2018 we particularly want to reflect on the role these systems
played in our experiments. This is of practical and also of epistemological relevance, since these
systems, as we will evidence, can indeed substantially foster the gain of new knowledge. We
will therefore highlight relevant points in our stepwise development in which these systems, in
particular, the model finder Nitpick, supported the gain of intuition by providing countermodels
to still slightly flawed axioms or definitions. And the theorem provers supported the detection
of the constricted inconsistency, in addition to the important, albeit more traditional, role they
played in confirming equivalences between different axioms sets as soon as we arrived at their
correct formulations.
Despite our reassuring overall teamwork experience, which involved a domain expert (Scott),
a theorem proving expert (Benzm¨uller) and the Isabelle/HOL framework, we also collected
several critical remarks pointing to a range of improvement opportunities. Some of these improvement opportunities are of technical nature, others may include theoretical aspects. For
example, Nitpick should be improved by devising and implementing better readable and eventually more domain specific representations of models and countermodels. In our experiments
such conversions were in fact laboriously handled by hand by Benzm¨uller and the results were
then communicated by email to Scott. In some cases calls of external theorem provers via
Sledgehammer resulted in technical error messages, which may demotivate non-expert users,
and when the theorem provers succeeded, then their proofs could most of the time not be converted into informative Isar style proofs. The constricted inconsistency result, for example, had
-----
Some Reflections on a Computer-aided Theory Exploration Study in Category Theory Benzmller and Scott
to be reconstructed by hand to obtain an human-friendly Isar style proof (see [4] for a similar
experience in a different context).
Hence, our successful experiments, in which automated reasoning tools integrated in Isabelle/HOL have demonstrated their capabilities beyond mere proof verification, still required
a close interaction between three players: a domain expert, a theorem proving expert and the
Isabelle/HOL proof assistant. The challenge in fact still is to get the second player completely
out of the loop, without requiring the first player to adopt a nearly identical level of technical
expertise in a resource-intensive, laborious manner.
#### References
[1] C. Benzm¨uller and D. Scott. Automating free logic in Isabelle/HOL. In G.-M. Greuel, T. Koch,
P. Paule, and A. Sommese, editors, Mathematical Software – ICMS 2016, volume 9725 of LNCS,
pages 43–50, Berlin, Germany, 2016. Springer.
[2] C. Benzm¨uller and D. Scott. Axiomatizing category theory in free logic. CoRR, abs/1609.01493,
2016. This work is currently submitted for journal publication.
[3] C. Benzm¨uller, N. Sultana, L. C. Paulson, and F. Theiss. The higher-order prover Leo-II. Journal
_of Automated Reasoning, 55(4):389–404, 2015._
[4] C. Benzm¨uller and B. Woltzenlogel Paleo. The inconsistency in G¨odel’s ontological argument: A
success story for AI in metaphysics. In IJCAI 2016, volume 1-3, pages 936–942. AAAI Press, 2016.
[5] J. C. Blanchette, S. B¨ohme, and L. C. Paulson. Extending Sledgehammer with SMT solvers.
_Journal of Automated Reasoning, 51(1):109–128, 2013._
[6] J. C. Blanchette, A. Popescu, D. Wand, and C. Weidenbach. More SPASS with Isabelle - Superposition with Hard Sorts and Configurable Simplification. In ITP, volume 7406 of LNCS, pages
345–360. Springer, 2012.
[7] J.C. Blanchette and T. Nipkow. Nitpick: A counterexample generator for higher-order logic based
on a relational model finder. In ITP 2010, number 6172 in LNCS, pages 131–146. Springer, 2010.
[8] C. E. Brown. Satallax: An automatic higher-order prover. In Automated Reasoning – IJCAR,
volume 7364 of LNCS, pages 111–117. Springer, 2012.
[9] L. M. de Moura and N. Bjørner. Z3: An Efficient SMT Solver. In TACAS, volume 4963 of LNCS,
pages 337–340. Springer, 2008.
[10] M. Deters, A. Reynolds, T. King, C. W. Barrett, and C. Tinelli. A tour of CVC4: how it works,
and how to use it. In FMCAD, page 7. IEEE, 2014.
[11] P. Freyd and A. Scedrov. Categories, Allegories. North Holland, 1990.
[12] T. Nipkow, L. C. Paulson, and M. Wenzel. Isabelle/HOL: A Proof Assistant for Higher-Order
_Logic. Number 2283 in LNCS. Springer, 2002._
[13] S. Schulz. System description: E 1.8. In LPAR-19, volume 8312 of LNCS, pages 735–743. Springer,
2013.
[14] D. Scott. Existence and description in formal logic. In R. Schoenman, editor, Bertrand Russell:
_Philosopher of the Century, pages 181–200. George Allen & Unwin, London, 1967. (Reprinted with_
additions in: Philosophical Application of Free Logic, edited by K. Lambert. Oxford Universitry
Press, 1991, pp. 28 - 48).
[15] D. Scott. Identity and existence in intuitionistic logic. In M. Fourman, C. Mulvey, and D. Scott,
editors, Applications of Sheaves: Proceedings of the Research Symposium on Applications of Sheaf
_Theory to Logic, Algebra, and Analysis, Durham, July 9–21, 1977, volume 752 of Lecture Notes_
_in Mathematics, pages 660–696. Springer, 1979._
-----
### Project Proposal: Proof Guidance by Compression
Lasse Blaauwbroek[∗]
Czech Institute for Informatics, Robotics and Cybernetics,
Czech Republic
```
[email protected]
```
**Abstract**
We propose a method for guiding the proof search of theorem provers based on compression. Given a pre-existing corpus of proof states and corresponding helpful proof steps,
we propose compression as a means to find the state that is closest to the current proof
state, thereby giving us a likely next proof step. We compare two states by treating them
as strings, and then compressing their concatenations. If the states are very similar, we
may expect the compressed string to be much smaller due to information sharing between
the states. This can then be used to create a distance metric between proof states.
**Normalized Compression Distance** Let C be a reference compression algorithm such that
_C(s) denotes the compressed version of s. As first done by Cilibrasi and Vitnyi [1] we will be_
utilizing this compressor to build a similarity metric. The intuitive idea behind this is that
if two string s and t are similar, the compressor can use the information in s to reduce the
compression size of t. Hence, we would expect |C(st)| to be much smaller than |C(s)| + |C(t)|,
and when s and t are equal we expect |C(st)| to approach |C(s)| + b where b is some small
constant used to encode the duplicate information. On the other hand, when s and t share no
information, |C(st)| is expected to be equal or larger than |C(s)| + |C(t)|.
We now define a distance function, called the Normalized Compression Distance (NCD),
which is formed by normalizing |C(st)| such that longer strings are not penalized.
NCDC(s, t) =
_[|][C][(][st]max([)][| −]_ [min(]C(s)[|][C], [(]C[s][)]([|]t[,])[ |][C]) [(][t][)][|][)]
_|_ _|_ _|_ _|_
Let us for a moment consider the perfect compressor K, of which the compression size is called
the Kolmogorov complexity. It is known that NCDK satisfies all the standard laws of a distance
metric [4]. More than that, it can be shown that NCDK is in some sense the “universal” metric
because it simultaneously captures all other (computable) metrics. Using this metric would
therefore subsume all other metrics, effectively solving all problems in Artificial Intelligence.
Unfortunately, it is well-known that the Kolmogorov complexity is an undecidable function,
thereby crushing our dream of perfect AI. We can, however, still hope to create an approximation
to this metric by using existing compression algorithms.
**Resolution Selection** In this proposal, we endeavour to apply the NCD to mathematical
formulas. Our hope is that a good compression scheme will automatically find similar structures
within formulas, which would make for a good predictor of similar formulas. However, the fact
that formulas are usually represented by relatively short strings is a practical problem here.
Most compression algorithms are optimized for large bodies of text, and often have a substantial
constant overhead (i.e. to store dictionaries). Therefore, these algorithms perform rather poor
on short strings (often inflating the size of the string instead of deflating it).
_∗This work was supported by the European Regional Development Fund under the project AI&Reasoning_
(reg. no. CZ.02.1.01/0.0/0.0/15 003/0000466)
-----
Project Proposal: Proof Guidance by Compression Blaauwbroek
In order to combat this problem, we propose not to compare individual formulas with each
other, but rather to compare entire proof states that represent partial proofs of theorem provers.
These states should generally be much larger and therefore better suited for compression. Although this idea should in principle be widely applicable, we have chosen the leanCoP theorem
prover [3] for our initial experiments. This prover tries to derive a contradiction by applying
extension and reduction inferences to a connection tableau. In order to apply an inference rule
on a leaf of the tableau, the prover must choose from a list of possible clauses to perform a
rule with. To make this choice, one can try to look at the current branch of the tableaux. One
option is to try and find a previously encountered proof situation that is similar to the current
situation and for which the best next inference is known. We propose to find this similar proof
state using the NCD. For this, a string is created that is representative of the proof state, by
simply concatenating the literals on the current proof branch together with the list of possible
clauses to perform resolution with.
**Prediction by Partial Matching** Selecting a good compression algorithm for the task
described above is not easy. We wish the algorithm to have a small constant size overhead, and
one that compresses good on small texts. This means that algorithms based on dictionaries are
not a good fit. The state of the art in compression is based on Prediction by Partial Matching
(PPM) [2]. In short, the idea is to process the input as a stream. While processing, a predictive
model is built that tries to guess what the next character(s) in the stream are. Whenever the
guess is correct, these characters do not have to be included in the output stream. For decoding
one can then build the same predictive model to obtain the missing characters. This approach
has the advantage that the model does not need to be stored in the output stream, making the
constant overhead effectively zero. The predictive models of general purpose compressors are
generally geared towards (human) text. We are looking into optimizing the model for formulas
by making use of the tree-structure found in formulas.
**Efficiency** The task of finding the most similar string to a given string s out of a large pool of
candidates can be very computationally expensive because the compressor needs to be invoked
for every candidate in the pool. We need a more efficient method of finding the most similar
string in the pool. It can be shown that under reasonable assumptions, the NCD of an imperfect
compressor approximately admits the laws of a metric [1]. However, the NCD does not give us
a vector space to work with to speed up this algorithm.
We propose a method that imposes a graph on the set of strings in the pool, such that strings
that are similar are close neighbors of each other in the graph. The idea is to approximate an
_n-dimensional vector space. The n neighbors of a string s are chosen such that they are as_
close as possible to s while being as far as possible from each other. This ensures that different
neighbors are as orthogonal to each other as possible. We define this graph as follows. Let S
be the set of strings in our pool. The notation Sn, denotes the set containing all subsets of S
of size n.
_Sn = {X ⊆_ _S | |X| = n}_
Now, let s ∈ _S. We define the set of successors of s as follows._
_NCD(t, u)_
out(s) = arg max _t,u∈X_
_X∈Sn_ _tP∈X_ _NCD(s, t)_
P
Equipped with this graph, we propose a hill-climbing algorithm to find the most similar string
in the pool. We simply start with a random node in the graph, and find the neighbor that
provides the most improvement in similarity. This process is repeated until a local optimum is
reached.
-----
Project Proposal: Proof Guidance by Compression Blaauwbroek
#### References
[1] Rudi Cilibrasi and Paul M. B. Vit´anyi. Clustering by compression. CoRR, cs.CV/0312044, 2003.
[2] John G. Cleary and Ian H. Witten. Data compression using adaptive coding and partial string
matching. IEEE Trans. Communications, 32(4):396–402, 1984.
[3] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. Certified connection tableaux proofs for HOL
Light and TPTP. In Xavier Leroy and Alwen Tiu, editors, Proc. of the 4th Conference on Certified
_Programs and Proofs (CPP’15), pages 59–66. ACM, 2015._
[4] Ming Li, Xin Chen, Xin Li, Bin Ma, and Paul MB Vit´anyi. The similarity metric. IEEE transactions
_on Information Theory, 50(12):3250–3264, 2004._
-----
#### Hammer for Coq: Automation for Dependent Type Theory
ukasz Czajka and Cezary Kaliszyk
University of Innsbruck
Abstract. We present an architecture of a full hammer for dependent
type theory together with its implementation for the Coq proof assistant.
A key component of the hammer is a proposed translation from the Calculus of Inductive Constructions, with certain extensions introduced by
Coq, to untyped rst-order logic. The translation is suciently sound
and complete to be of practical use for automated theorem provers. We
also introduce a proof reconstruction mechanism based on an eauto-type
algorithm combined with limited rewriting, congruence closure and some
forward reasoning. The algorithm is able to re-prove in the Coq logic
most of the theorems established by the ATPs. Together with machinelearning based selection of relevant premises this constitutes a full hammer system. The performance of the overall procedure is evaluated in a
bootstrapping scenario emulating the development of the Coq standard
library. For each theorem in the library only the previous theorems and
proofs can be used. We show that 40.8% of the theorems can be proved
in a push-button mode in about 40 seconds of real time on a 8-CPU
system.
-----
### Computational Exploration of String Theory
_Michael R. Douglas_
keywords: computational mathematics, string theory, theoretical physics
**Abstract**
Superstring theory is a physical framework that unifies quantum mechanics and general relativity in ten-dimensional space-time. By assuming that the extra six dimensions are a small
compact manifold, one can derive theories of physics in four dimensions that can reproduce all
the experiments and observations to date, as well as predicting new physics such as supersymmetry and dark matter.
The detailed predictions of the theory depend on what we assume for the extra dimensions –
their topology and metric, and the existence and location of a variety of extra fields and physical
objects such as “branes.” Given a set of these choices one can derive a potential function, and
each local minimum of one of these potential functions is referred to as a “string vacuum.”
Given a choice of vacuum, much is known about how to compute detailed physical predictions
such as the spectrum of particles and their masses, data which can be encoded in a relatively
compact structure called an effective field theory (EFT).
The choices and minima which determine a vacuum can be classified so that the problem of
enumerating vacua becomes combinatorial. This classification and the computations involve
a great deal of mathematics, mostly group theory and algebraic geometry, but also Riemannian geometry and many other fields. More specifically the required mathematics includes the
theories of Calabi-Yau manifolds, toric geometry and polytopes, resolution of singularities,
sheaf cohomology, Hodge theory, moduli spaces, automorphic functions and forms, and many
numerical techniques.
The number of string vacua is extremely large, both because of the number of combinatorial
choices involved, and on physical grounds. Certain physical problems – most importantly the
cosmological constant problem – can only be solved if the number of vacua is greater than
roughly 10[120]. There are mathematical arguments that the total number of vacua is finite, with
estimates ranging from 10[500] to 10[250][,][000]. The full structure is not just a set of this number
of vacua, each with an associated EFT, but is actually a weighted graph whose nodes are the
vacua. This graph describes quantum tunneling processes which connect the vacua, which
occur in cosmological dynamics and generate a Markov process on the set of vacua. This
structure is known as the string landscape.
Physicists have been using computational methods to study the string landscape since its beginnings. A celebrated example developed in the 90’s and still of central importance is the
-----
Kreuzer-Skarke database of reflexive polytopes, which determines the set of six-dimensional
Calabi-Yau manifolds which can be realized as toric hypersurfaces. As time goes on, more
and more sophisticated computational techniques are being employed. Computational algebra
systems such as GAP, Macaulay 2 and Sage are regularly used, and some of this work has been
done in collaboration with computational algebraic geometers. A new trend is to use machine
learning and data science techniques – this was the subject of a recent meeting String Theory
**and Data Science at Northeastern University. Repositories for the code and data generated by**
this research are now in the process of being created.
In this talk we give an introduction to this area for computer scientists, sketching the ideas and
some of the basic questions we hope to answer. We will also present some ideas about what
a really satisfactory platform for this research and indeed for any repository of mathematical
knowledge would look like. In a nutshell, it should be a distributed and collaborative knowledge repository, in some ways like Wikipedia, but in which the basic units of knowledge are
not expressed in natural language but instead in formal languages. This raises many difficult
problems of how to maintain formal consistency in the face of distributed and asynchronous
updates at all levels, from the foundational definitions on up. We will identify some of these
problems and hope to stimulate discussion of them.
-----
### Revisiting SAD
Steffen Frerix and Peter Koepke
University of Bonn, Germany
Email: [email protected] and [email protected]
2nd December 2017
The System for Automated Deduction (SAD) by Andrei Paskevich et.al. (see [3] and
```
http://nevidal.org/sad.en.html) combines natural language input with first-order proof checking.
```
Mathematical texts are expressed in the controlled mathematical language ForTheL, parsed into a firstorder based internal format, and checked for logical correctness by a ”reasoner” together with some
standard automated theorem prover. The following excerpt of a document accepted by SAD demonstrates that ForTheL/SAD can come close to standard mathematical language and argumentation.
**Theorem The set of prime numbers is infinite.**
**Proof** Let A be a finite set of prime numbers. Take a function p and a number r such
that p lists A in r steps. ran p ⊆ N+ . _ri=1_ _[p][i][ ̸][= 0. Take][ n][ =][ Q]i[r]=1_ _[p][i][ + 1.][ n][ is nontrivial.]_
Take a prime divisor q of n.
Let us show that q is not an element of A. Assume the contrary. Take i such that
Q
(1 ≤ _i ≤_ _r and q = pi). pi divides_ _i=1_ _[p][i][ (by MultProd). Then][ q][ divides 1 (by DivMin).]_
Contradiction. qed.
Hence A is not the set of prime numbers. _2_
[Q][r]
The typesetting is done by LATE[X;] standard ForTheL texts allow _patterns_ like
_\designed LProduct{ApT}{E[X-macro outputs a pretty-printed]1}{r} that SAD reads as a ternary term with arguments[ Q][r]i=1_ _[p][i][.]_ _p, 1, r, whereas a custom-_
Some ”natural mathematics” features of SAD are present in the example: anonymous variables
like ”set of prime numbers”, soft type specifications in adjectives like ”prime” or ”finite”, user-specified
linguistic and symbolic patterns like ”x divides y”, proof methods like induction, case splits or contradiction. SAD employs a light logical background system with efficient and mathematically well-motivated
handling of premises, definitions and local theses. Moreover, an SAD verification includes ontological
checking [6], resembling typechecking for typed languages, which, e.g., allows a correct treatment of
partial functions.
The current SAD-system is an admirable prototype based on the doctoral project of Andrei Paskevich
[3]. It was tuned to check some impressive miniatures (e.g. [5]), but in general it has severe limitations. Longer texts, like a common foundation file for sets, functions, numbers etc. cannot be checked
efficiently. Therefore basic notions for sets and functions have to be reimplemented in every example
text; there is only minimal and inefficient built-in support for sets and none for functions. While the
input language allows formalizations close to natural language, the parser does not check for grammatical correctness and also accepts completely ungrammatical texts. Furthermore, the range of allowed
variables and constants is limited, the ForTheL formats do not interact well with LATE[X.]
Reimplementing basic notions each time without strong correctness checks makes the system insecure. Inconsistencies and unintended interpretations can easily be introduced accidentally. This can be
especially problematic if one wishes to combine existing ForTheL texts.
SAD has not been developed further for ten years. The impressive features and examples, however,
motivate us to go into the system again, identify and remedy some weaknesses, and explore possibilities
for further development. In his MA project, the first author has analysed the code and studied the
internal behaviour. Already simple modifications of the thesis handling and of internal reasoning
methods allow the checking of much longer texts. It appears possible that a strengthened SAD is able
to verify hierarchies of texts rather than isolated examples.
-----
To write interdependent mathematical texts necessitates a common foundation. In SAD, the builtin support for sets, or rather Fregian classes, is unintuive and allows the verification of undesirable
statements like “There exists a set equal to { set x | x /∈ x }”. It is not sufficient to write a common
foundational, set-theory inspired ForTheL-text on sets, functions, numbers etc., since ForTheL does
not support schemas of axioms or theorems. The ubiquity of sets and functions in mathematics really
demands a special implementation, on which we are working.
The basis of SAD is first order logic. The ForTheL language requires soft, “linguistic” typings or
sortings like “Let f be a function”; quantified variables have to be typed. Types are interpreted as
unary predicates in the background logic. The numerous recurrences of such predicates unfortunately
clutter the input of the backend ATP. Therefore the verification process should make use of some sorted
logic. There have been ATP developments towards sorted first order logic: the TPTP language has
added the input forms TFF0 [4] and TFF1 [1] which are now supported by powerful ATPs. We are
currently working on a suitable logical setting and mechanisms that may safely and fruitfully replace
unary types by sorts. The implementation of sets and functions in particular should greatly benefit
from sorting.
In ForTheL formalizations, one chooses certain notions as undefined base notions depending on the
level of abstraction one is working on. While this can produce elegant and impressive example texts, it
makes the question of how to import or export theorems and definitions between documents difficult.
A theory of relations between ForTheL texts is necessary to provide the means to build hierarchical
libraries.
The naturality of ForTheL is a strong point of SAD that we want to further improve on. We are
working on some desirable language extensions and LATE[X-compatible symbolic extensions. We shall]
also examine whether proper natural language parsers from computer linguistics should replace the ad
hoc parsing used by SAD or be used as a preprocessor. This work is guided by our experiences from
the Naproche project [2].
In our talk we shall survey the classical and improved SAD system, including examples, and we shall
discuss some theoretical aspects. Our experiments make it conceivable that textbook mathematics like
an introduction to number systems can be formulated and feasably be checked by an improved SAD.
The missing resemblance between formalized mathematics and real mathematics seems to be a reason
why the general mathematical community is reluctant to integrate proof assistants into their work [7].
We view our research as a contribution to the question whether formal mathematics can be made more
acceptable by using ForTheL-like natural mathematical languages and SAD-like reasoning.
#### References
[1] Blanchette, J. C., and Paskevich, A. TFF1: The TPTP typed first-order form with rank-1
polymorphism. In International Conference on Automated Deduction (2013), Springer, pp. 414–420.
[2] Cramer, M., Koepke, P., K¨uhlwein, D., and Schr¨oder, B. The Naproche system. Intelligent
_Computer Mathematics, Springer LNCS, ISBN (2009), 978–3._
[3] Paskevych, A. M´ethodes de formalisation des connaissances et des raisonnements math´ematiques:
_aspects appliqu´es et th´eoriques. PhD thesis, Universit´e Paris 12, 2007._
[4] Sutcliffe, G., Schulz, S., Claessen, K., and Baumgartner, P. The TPTP typed first-order
form with arithmetic. In International Conference on Logic for Programming Artificial Intelligence
_and Reasoning (2012), Springer, pp. 406–419._
[5] Verchinine, K., Lyaletski, A., and Paskevich, A. System for Automated Deduction (SAD):
a tool for proof verification. Automated Deduction–CADE-21 (2007), 398–403.
[6] Verchinine, K., Lyaletski, A., Paskevich, A., and Anisimov, A. On correctness of mathematical texts from a logical and practical point of view. In International Conference on Intelligent
_Computer Mathematics (2008), Springer, pp. 583–598._
[7] Wiedijk, F. The QED manifesto revisited. Studies in Logic, Grammar and Rhetoric 10, 23 (2007),
121–133.
-----
### Talk on Safe Reinforcement Learning via Formal Methods
Nathan Fulton and Andr´e Platzer
Carnegie Mellon University, Pittsburgh, PA, U.S.A.
_{nathanfu, aplatzer}@cs.cmu.edu_
**Abstract**
This Contributed Talk will present our recently published [3] work on applying verification technologies to safe reinforcement learning and recent extensions to this work. We
show how to use formally verified hybrid systems models to precisely sandbox reinforcement learning algorithms in robotics contexts. We prove that our approach preserves safety
guarantees, and demonstrate that we retain the empirical performance benefits provided
by reinforcement learning. We also explore various points in the design space for these jus_tified speculative controllers in a simple model of adaptive cruise control for autonomous_
cars. Finally, we discuss how to use verification results even when model inaccuracies are
detected at run time.
The work presented in [3] and discussed in this talk provides a general approach toward provably safe learning that is amenable to extension via different learning algorithms,
approaches toward formal verification, and methods for achieving safe reinforcement learning.
#### 1 Introduction and Background
_Cyber-physical systems (CPSs) are difficult to get right, which is why formal verification_
provides rigorous ways of establishing the safety of controllers with respect to a physical
model of the system under control. KeYmaera X and other hybrid systems verification
tools provide a way of obtaining safety results for cyber-physical systems [2].
Difficulties with formally verified controllers arise whenever there are discrepancies
between the verified models and the real implementation. Such discrepancies between
model and reality are inevitable in physical systems operating in open environments [4].
_Reinforcement learning (RL) [7] provides ways of learning controllers that tend to per-_
form well without the need for a perfect model – or even any model at all. Most approaches
toward reinforcement learning provide no guarantee about the safety of the learned controller or about the safety of actions taken during learning. Absence of safety guarantees
become a crippling problem when reinforcement learning is applied to safety-critical CPSs.
Unfortunately, testing alone is an intractable approach toward system verification and
validation.
This talk will present a technique, called Justified Speculative Control (JSC) [3], for
transferring formal verification results to controllers obtained via reinforcement learning.
We also discuss some experiments which demonstrate that formal verification results can
be used to help guide a system back into modeled portions of state-space.
#### 2 Summary of Results
Our approach toward safe learning combines hybrid systems verification, runtime monitoring, and reinforcement learning. Justified Speculative Control (JSC) extends model-based
safety theorems about hybrid systems to policies obtained through reinforcement learning.
The approach begins with a hybrid system specified in Differential Dynamic Logic [5, 6]
and verified in KeYmaera X [2]. The verified system has the general shape
-----
easychair: Running title head is undefined. easychair: Running author head is undefined.
_init →_ [{ctrl; plant}[∗]]safe
where init describes a set of initial conditions, ctrl is a (typically non-deterministic)
discrete program describing all possible control actions, plant is a system of ordinary differential equations describing the physical behavior of the system, and safe is a description
of the safe states for the system. The entire formula states that if the system starts in the
set init, then after any arbitrary number of control actions followed by physical movement,
the system will always remain within the set safe.
Given such a verified model, we generate runtime monitors that monitor both the controller (CM) and the entire model (MM) for deviation from the verified model. These
monitors are generated using the ModelPlex algorithm implemented as a tactic in KeYmaera X [1, 4].
The JSC algorithm takes a generic reinforcement learning algorithm A and constrains
this algorithm using our verified monitors. As long as the model monitor returns True,
A may only optimize over actions for which the controller monitor CM returns True.
Otherwise, when a model violation is detected, A is justified in optimizing over the entire
state space. This talk will present some basic results about the JSC algorithm, discussed
at greater length in [3]:
_• When environments are accurately modeled (i.e., when the hybrid systems model_
accurately characterizes observed sensor inputs), formal verification results for the
model from which the controller and model monitors are derived transfer to the
learning process. Verification results also transfer to extracted policies.
_• When model violations are detected, transforming boolean-valued model monitors_
_MM to real-valued monitors provides an experimentally promising approach toward_
incorporating verification results into safe reinforcement learning. Intuitively, these
real-valued monitors drive the learning process back into modeled state-space.
#### References
[1] Nathan Fulton, Stefan Mitsch, Brandon Bohrer, and Andr`e Platzer. Bellerophon:
Tactical theorem proving for hybrid systems. In Interactive Theorem Proving 2017,
2017.
[2] Nathan Fulton, Stefan Mitsch, Jan-David Quesel, Marcus V¨olp, and Andr´e Platzer.
KeYmaera X: An axiomatic tactical theorem prover for hybrid systems. In Conference
_on Automated Deduction, 2015._
[3] Nathan Fulton and Andr´e Platzer. Safe Reinforcement Learning via Formal Methods: Toward Safe Control Through Proof and Learning. In The Thirty Second AAAI
_Conference on Artificial Intelligence, 2018._
[4] Stefan Mitsch and Andr´e Platzer. ModelPlex: Verified runtime validation of verified
cyber-physical system models. Form. Methods Syst. Des., 49(1):33–74, 2016. Special
issue of selected papers from RV’14.
[5] Andr´e Platzer. The complete proof theory of hybrid systems. In LICS, pages 541–550.
IEEE, 2012.
[6] Andr´e Platzer. Logics of dynamical systems. In LICS, pages 13–24. IEEE, 2012.
[7] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction.
MIT Press, Cambridge, MA, 1998. A Bradford Book.
-----
### Teaching a Theorem Prover to let its Mind Wander
Ulrich Furbach
Universit¨at Koblenz-Landau
```
[email protected]
```
Claudia Schon[∗]
Universit¨at Koblenz-Landau
```
[email protected]
```
Automated Reasoning in commonsense contexts is challenged by the need of taking into account
very large knowledge sources. The authors addressed this aspect in various previous papers on the topic
of natural language query answering ([3, 4]) where an engineering approach is taken, putting together
what is available and what looks helpful. In this contribution we are discussing instead of this ad hoc
methods the use of a framework from psychology.
During an ongoing series of workshops around the topic Bridging the Gap between Human and
_Automated Reasoning ([6]) we are aiming at learning from human reasoning on how to reason efficiently_
even when large and different knowledge sources are necessary. This ability of humans seems to be
strongly related to consciousness. The problem of consciousness and in particular the question whether
an AI system can be conscious is discussed in depth in philosophy, neuroscience or psychology. There
is one very well accepted theory about human consciousness, namely the Global Workspace Theory
introduced by Bernhard J. Baars ([1]). Baars’ motivation for his theory are the following observation in
human reasoning and behavior: The human brain ’suffers’ from a kind of limited capacity, e.g. immediate
memory, selectivity of attention and the observation that we rarely can do two demanding actions at a
time. The situation, however, is different, when we observe the brain directly: a huge neural network, a
lot of layers and connections and various parts that are specialized to different tasks, e.g. recognition of
visual information, controlling body functions or language recognition and generation. All this is highly
parallel and most of the time unconscious. One, at least for our community astonishing capability is, the
use of the long-term memory. We do not know its size, but as Baars is pointing out, if we pay attention
to 10,000 different pictures during several days, we can recognize each of them without attempting to
memorize them. Memory search performed by the brain is highly efficient and it looks like we have a
huge domain of knowledge, that is unconscious and we get access to it by consciousness. And this is
exactly what Baars’ Global Workspace Theory is aiming to model – Consciousness is the key to reason
efficiently and goal oriented! Following the attempt by Baars and his co-workers, the question whether
AI systems can be conscious has to be rewritten: Can we allow ourselves to design AI systems without
consciousness? In the following we sketch the Global Workspace Theory (GWT) and we try to relate
it to an automated reasoning context. We furthermore present an approach on how to let the theorem
prover Hyper [2] ‘let its mind wander‘, which is inspired by GWT.
**Global Workspace Theory** GWT is usually explained by describing it as a Working Theatre of Con_sciousness. We can think about the brain a theatre consisting of a stage, an attentional spotlight shining_
at the stage, actors which represent the contents, an audience and some people behind the scene. Lets
look at the parts in more detail and relate it to automated reasoning aspects.
_The stage. The working memory consist of verbal and imagined items. Most parts of the working_
memory are in the dark, but there are a few active items, usually the short time memory. – We consider
the clause set which is the input to a reasoning system together with the derived clauses to be the stage.
_The spotlight of attention. This bright spotlight helps in guiding and navigating through the working_
memory. Humans can shift it at will, by imagining things or events. This is the part of the clause set
which is currently processed by the theorem prover.
_The actors correspond to the application of inference rules on the set of clauses currently processed_
by the theorem provers. The result of the actors’ actions correspond to new formulae derived by an
inference step.
_∗Work supported by DFG SCHO 1789/1-1 ‘CORG’._
-----
Figure 1: Hyper’s mind wandering.
_Context behind the scene. Behind the scenes, the director coordinates the show and stage designers_
and make-up artists prepare the next scenes. – We consider the reasoner and its control as a director.
_The audience. According to Baars, the audience represents the vast collection of specialized knowl-_
edge. It can be considered as a kind of long-term memory and consists of specialized properties, which
are unconscious. Navigation through this part of the knowledge is done mostly unconscious. – This is
the background knowledge, like Cyc, Yago or knowledge graphs.
**Let Hyper’s mind wander** Proof tasks in the area of commonsense reasoning usually are different
from classical proof tasks. Due to incomplete background knowledge, one cannot expect a theorem
prover to find a proof for a certain problem. In this area, the task is rather to start from an initial problem
description and to perform as many inferences as possible in order to approach a goal. This is why
computed models are of special interest in this area. The Hyper theorem prover is a hypertableau based
theorem prover which is able to construct a model for satisfiable clause sets.
The Working Theatre of Consciousness metaphor can be used to give Hyper the ability to let its mind
wander, similar to the way humans move their attention. Fig. 1 provides an overview of the interplay
between Hyper, background knowledge and selection mechanisms that is used to accomplish this goal.
Starting with an initial clause set S, background knowledge appropriate for this clause set is selected (for
example by SInE) and fed into the theorem prover (indicated by the dashed line) together with the clause
set. Hyper constructs a model for this input which is indicated by number 1 in Fig. 1. The open branches
of this tableau correspond to new knowledge derived by Hyper. We use this new knowledge to select new
clauses from the large background knowledge. In order to allow for some creative variation, we include
ConceptNet [5] into this step by looking up some entities which are related to some of the predicate
symbols in this set and selecting background knowledge for these entities as well. In addition to that, we
bridge vocabularies as described in [4]. This new background knowledge is then fed into Hyper together
with the freshly inferred knowledge. Hyper constructs a second model (number 2 in Fig. 1), which again
contains open branches. The new knowledge in these open branches is again used to select appropriate
background knowledge. This process is repeated as long as desired. During the whole process, Hyper
is provided with different sets of clauses. Each of them representing the current focus of Hyper’s mind.
The process of searching for new background knowledge can be seen as a shift of his mind.
This proposal presents first ideas about incorporating aspects from consciousness research into automated reasoning. We are currently setting up the infrastructure for extensive experiments. Results will
be available until the workshop.
-----
#### References
[1] B. J. Baars. In the Theatre of Consciousness. Global Workspace Theory, A Rigorous Scientific Theory of
Consciousness. Journal of Consciousness Studies, 4(4):292–309, 1997.
[2] M. Bender, B. Pelzer, and C. Schon. System description: E-KRHyper 1.4 - extensions for unique names and
description logic. In M. P. Bonacina, editor, CADE-24, LNCS, 2013.
[3] U. Furbach, I. Gl¨ockner, and B. Pelzer. An application of automated reasoning in natural language question
answering. AI Commun., 23(2-3):241–265, 2010.
[4] U. Furbach and C. Schon. Commonsense reasoning meets theorem proving. In MATES, volume 9872 of
_Lecture Notes in Computer Science, pages 3–17. Springer, 2016._
[5] H. Liu and P. Singh. ConceptNet — a practical commonsense reasoning tool-kit. BT Technology Journal,
22(4):211–226, Oct. 2004.
[6] C. Schon and U. Furbach, editors. Proceedings of the Workshop on Bridging the Gap between Human and
_Automated Reasoning co-located with 25th International Joint Conference on Artificial Intelligence (IJCAI_
_2016), New York, USA, July 9, 2016, volume 1651 of CEUR Workshop Proceedings. CEUR-WS.org, 2016._
-----
#### TacticToe: Learning to Prove with Tactics
Thibault Gauthier, Cezary Kaliszyk, Josef Urban, Ramana Kumar, and
Michael Norrish
1 University of Innsbruck
2 Czech Technical University
3 DeepMind
4 Data61
Abstract. The talk will discuss the tactical prover TacticToe implemented on top of the HOL4 interactive theorem prover. TacticToe learns
from human proofs which mathematical method is useful in a particular
proof situation. This knowledge is then used in a Monte Carlo tree search
algorithm that explores the most promising tactic-level proof paths. On
a single CPU, with a time limit of 60 seconds, TacticToe proves 66.4% of
7164 theorems in HOL4's standard library whereas Eprover solves 34.5%.
The success rate rises to 69.0% by combining the results of TacticToe
and Eprover.
-----
### First Experiments with Watchlist Guidance on Mizar[∗]
Zarathustra Goertzel, Jan Jakub˚uv, and Josef Urban
Czech Technical University, Prague,
**Using Watchlists for Guiding ATPs on Mizar**
The E automated theorem prover [8] has recently gained (thanks to Stephan Schulz) the ability to guide the selection of the given clauses by using the watchlist mechanism (also called
_hints [10] in Prover9 [5]). This mechanism allows E to construct watchlists from lemmas par-_
ticipating in the proofs of related problems to guide clause selection. The talk will discuss
several experiments with using this mechanism over the Mizar40/MPTP dataset [9]. Watchlists have proved essential in the AIM project [4] done with Prover9 for obtaining very long and
advanced proofs. Each inferred clause C is checked for subsumption of the watchlist clauses.
If C subsumes a watchlist clause W, then C is given higher priority, and W is removed from
the watchlist. One way to think of the watchlist of high-frequency clauses is as a toolkit of
mathematical tricks.
We experimented with mining watchlist clauses from 24702 proofs found by E on a benchmark of 57897 Mizar40 [2] problems.[1] The proofs were found by an evolutionarily optimized [1]
ensemble of 32 E strategies (our baseline), each run for 5 s. Each strategy is specified as a
frequency-weighted combination of parametrized clause evaluation functions (CEF) combined
with a selection of inference rules. Below we show a simplified example strategy specifying
the term ordering KBO, and combining (with weights 2 and 4) two CEFs made up of weight
functions Clauseweight and FIFOWeight and priority functions DeferSOS and PreferProcessed.
```
-tKBO -H(2*Clauseweight(DeferSoS,20,9999,4),4*FIFOWeight(PreferProcessed))
```
**Watchlist Selection**
In AIM, using all previous proof lemmas as a watchlist is often a method that works. Problems in
large ITP libraries such as Mizar/MML [6] however differ much more than the AIM problems,
making it more likely for unrelated watchlist lemmas to mislead the proof search. We have
experimented with the following methods for watchlist creation:
1. Initially, all 100, 000+ clauses were used. This slows E down to 6 given clauses per second.
2. Watchlists were constructed on a per Mizar article basis. The size ranges from 0 to 4000.
3. We also order proof clauses by frequency and test watchlist sizes using one watchlist for
a strategy or on an article basis.
4. Last, k-NN learning is used to suggest useful clauses based on symbol and term-based
features [3], that is symbols, walks of length 2 on formula trees and common subterms
(with variables and skolem symbols equalized).
**Using Watchlist in E Strategies**
Watchlist subsumption defines a particular priority function [8] called PreferWatchlist, assigning weights to clauses. We test several ways how to use this priority function:
1. E prover has a default strategy evolved (genetically [7]) for watchlist use. (EVO)
_∗Supported by the ERC Consolidator grant no. 649043 AI4REASON._
1Precisely, we have used the small (bushy, re-proving) versions, but without ATP minimization.
-----
Watchlist Experiments for E on Mizar Goertzel
2. Add a CEF based on PreferWatchlist with high frequency to the 32 strategies. (uwl n)
```
-H(n*6*Defaultweight(PreferWatchlist),2*Clauseweight(DeferSoS,20,9999,4),...)
```
3. Instead of Defaultweight in uwl 1, use the CEFs used in EVO. (uwl evo)
4. Replace all priority functions in a strategy with PreferWatchlist. (pref)
```
-H(2*Clauseweight(PreferWatchlist,20,9999,4),4*FIFOWeight(PreferWatchlist))
```
5. Modify E to always prefer watchlist clauses and default to the given strategy. (uwl)
6. For all above strategies we add a “no-remove” option to keep the subsumed watchlist
clause in the watchlist, thus allowing its repeated subsumption with different clauses.
**Watchlist Performance**
|Strategy 02 08 09 26 28 total|baseline EVO uwl evo pref uwl 14223 17419 16485 17822 14762 14498 16790 16286 15472 15089 11917 13758 13327 13852 12382 12504 16478 14807 14006 13056 12803 14580 14290 13115 12069 21122 21948 22147 22477 21617|
|---|---|
|Size 10 100 256 512 1000 10000|pref02 pref28 3275 2410 3275 2279 3287 2211 3283 2180 3248 2211 2912 2212|
|---|---|
Strategy baseline EVO uwl evo pref uwl
02 14223 17419 16485 **17822** 14762
08 14498 16790 16286 15472 **15089**
09 11917 13758 13327 **13852** 12382
26 12504 **16478** 14807 14006 13056
28 12803 14580 14290 13115 **12069**
total 21122 21948 22147 22477 21617
Size pref02 pref28
10 3275 2410
100 3275 2279
256 3287 2211
512 3283 2180
1000 3248 2211
10000 2912 2212
Table 1: Left: results on the Mizar40 dataset (57897 problems) using per-article watchlists. The 5
greedily best strategies (in bold) cover 22725 (7.6% more) problems (we call this Greedy1 ). Right: Tests
of the watchlist size influence (ordered by frequency) on a random sample of 10000 problems using the
”no-remove” option. Pref28 uses per-article watchlists, and pref02 uses one common watchlist.
For testing, we use five greedily best E strategies covering 80% (21122) of the 24702 proofs.
Table 1 shows the first results. The watchlists are always constructed from the proofs found by
the baseline strategy. The 5 pref strategies prove together 1503 problems that the corresponding
5 baseline strategies don’t, and 514 problems on top of the 24702 found by all 32 baseline
strategies. The good performance of single watchlist strategies may however be also due to
memorization of the baseline proofs. For a fairer comparison of the individual strategies, we
create a random test set of 2000 problems and only construct watchlists from the proofs found
by baseline strategies on the remaining training set of 55897 problems, see Table 2.
baseline pref baseline uwl pref greedy1 greedy2
726 733 728 745 732 755
gain: 0.9% 0.2% 2.6% 0.8% 4%
Table 2: Test performance of the 5 strategies and their gain over the baseline. Pref baseline is pref run
with empty watchlist. Greedy1 is the 5-cover found in Table 1, and greedy2 is the new greedy 5-cover.
The individual performance becomes more realistic: greedy1 falls from 7.6% to under 1%.
_Pref is 2.6% better than the baseline, and 1.6% better than itself without any watchlist. The_
_Greedy2 cover includes pref02, pref baseline09, baseline26, EVO26, and baseline28 strategy._
Surprisingly, pref baseline performs better than the baseline. This means that the weight functions in CEFs often perform better without the priority function input.
Altogether, the watchlist feature helps E prove at least 4% more of the Mizar40 problems
than the baseline ensemble. Inspection of some of the watchlist-based proofs shows that some
are completely new, extending the set of 32524 ATP proofs found with high time limits in [2].
An example is BCIALG 4:44[2] where a nontrivial 30-line Mizar proof is obtained by E using 23
axioms, 200 proof steps and 7067 given clause loops. Experiments suggest the watchlist can
both guide and distract E, so a schedule may include both watchlist and no-watchlist runs.
Consecutive runs also seem feasible. Frequency sorted smaller watchlists appear to significantly
help. More work has to be done on the best way to learn from prior proofs with the watchlist.
[2http://grid01.ciirc.cvut.cz/~mptp/7.13.01_4.181.1147/html/bcialg_4.html#T44](http://grid01.ciirc.cvut.cz/~mptp/7.13.01_4.181.1147/html/bcialg_4.html#T44)
-----
Watchlist Experiments for E on Mizar Goertzel
#### References
[1] Jan Jakubuv and Josef Urban. BliStrTune: hierarchical invention of theorem proving strategies.
In Yves Bertot and Viktor Vafeiadis, editors, Proceedings of the 6th ACM SIGPLAN Conference
_on Certified Programs and Proofs, CPP 2017, Paris, France, January 16-17, 2017, pages 43–52._
ACM, 2017.
[2] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245–256,
2015.
[3] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. Efficient semantic features for automated reasoning over large theories. In Qiang Yang and Michael Wooldridge, editors, IJCAI’15, pages
3084–3090. AAAI Press, 2015.
[4] Michael K. Kinyon, Robert Veroff, and Petr Vojtechovsk´y. Loops with abelian inner mapping
groups: An application of automated deduction. In Maria Paola Bonacina and Mark E. Stickel,
editors, Automated Reasoning and Mathematics - Essays in Memory of William W. McCune,
volume 7788 of LNCS, pages 151–164. Springer, 2013.
[[5] William McCune. Prover9 and Mace4. http://www.cs.unm.edu/~mccune/prover9/, 2005–2010.](http://www.cs.unm.edu/~mccune/prover9/)
[[6] The Mizar Mathematical Library. http://mizar.org/.](http://mizar.org/)
[7] Simon Sch¨afer and Stephan Schulz. Breeding theorem proving heuristics with genetic algorithms.
In Georg Gottlob, Geoff Sutcliffe, and Andrei Voronkov, editors, Global Conference on Artificial
_Intelligence, GCAI 2015, Tbilisi, Georgia, October 16-19, 2015, volume 36 of EPiC Series in_
_Computing, pages 263–274. EasyChair, 2015._
[8] Stephan Schulz. System description: E 1.8. In Kenneth L. McMillan, Aart Middeldorp, and
Andrei Voronkov, editors, LPAR, volume 8312 of LNCS, pages 735–743. Springer, 2013.
[9] Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom. Reasoning,
37(1-2):21–43, 2006.
[10] Robert Veroff. Using hints to increase the effectiveness of an automated reasoning program: Case
studies. J. Autom. Reasoning, 16(3):223–239, 1996.
-----
### Guiding SMT Solvers with Monte Carlo Tree Search and Neural Networks
Stéphane Graham-Lengrand[1], Michael Färber[2]
1
CNRS - École Polytechnique, 91120 Palaiseau, France
```
[email protected]
```
2
Universität Innsbruck, 6020 Innsbruck, Austria
```
[email protected]
```
**Abstract**
Monte Carlo Tree Search (MCTS) is a technique to guide search in a large decision space by taking
random samples and evaluating their outcome. Frequently, MCTS is employed together with reward
heuristics learnt by neural networks. The talk will propose a guidance mechanism for SMT solvers
based on a combination of MCTS and neural networks.
Machine learning methods gain importance in automated reasoning. A particularly strong
trend are neural networks, having produced state-of-the-art results for premise selection
[WTWD17, ISA[+]16]. Outside of automated reasoning, neural networks have been combined
with Monte Carlo Tree Search, treating problems as diverse as finding good strategies to play the
game of Go [SHM[+]16] and planning of chemical syntheses [SKTW17, SPW17]. In automated
reasoning, Monte Carlo Tree Search (MCTS) has been applied to first-order automated theorem
proving, using hand-crafted heuristics instead of neural networks [FKU17]. We propose a
combination of Monte Carlo Tree Search and neural networks to guide the search performed by
an SMT solver.
We are exploring the idea of such guidance in the Psyche platform [GL13], which offers a
modular architecture for theorem proving. It implements an adaptation, to automated reasoning
in general and to SMT solving in particular, of the LCF architecture [Mil79, GMW79].
LCF is mostly used in Interactive Theorem Proving and is particualrly widely implemented
in the proof assistants of the HOL family, such as the HOL system [HOL], Isabelle [Isa], etc.
The LCF architecture allows theorem proving strategies to be programmable, while guaranteeing
the correctness of any claim that a formula is provable. The architecture’s kernel component
offers an API whose primitives implement basic reasoning inferences, and strategies can be
programmed on top of the kernel via the API. The claims of provability are then necessarily
_correct-by-construction, assuming the correctness of the kernel, but regardless of any potential_
defects in the design or in the implementation of strategies (or in the user’s input, for the case
of Interactive Theorem Proving).
_Psyche embraces this paradigm and, in the case of SMT solving, relates to a position_
paper by de Moura and Passmore, entitled “The strategy challenge in SMT solving” [dMP13],
which promoted the programmability of strategies as compositions of basic reasoning tasks,
explicitly referring to the LCF paradigm. This approach opens up the possibilities of extensively
experimenting with various strategies, whether they be handcrafted or machine-learned, while
never jeopardising the correctness of the solver’s output. Guiding the search by techniques
such as MCTS and neural networks can be envisaged more easily in provers whose architecture
implements this approach. Psyche’s architecture does so at a rather fine-grained level, separating
the code that implements reasoning inferences from the code that implements search strategies.
More precisely, the CDSAT branch of Psyche [CDS] implements the Conflict-Driven Satisfiability
-----
Guiding SMT Solvers with MCTS and Neural Networks S. Graham-Lengrand, M. Färber
framework [BGS17, BGLS18], which lifts from Boolean logic to generic theory combination the
conflict-driven clause learning (CDCL) algorithm used in pure SAT-solving:
Given an input SMT problem, the search space explored by Psyche/CDSAT consists of states
that describe specifications for a desired model of the input problem. The moves or actions
that can be made from such a state consist of assigning a value to a term or a literal, thereby
specifying the model further, until the existence or non-existence of a model satisfying those
specifications is manifest. In the former case, the input problem is concluded to be SAT. The
latter case represents a conflict, which is analysed so that a lemma can be learnt explaining
the reason for the conflict. Some of the assignments are reverted so that another area of the
search space can be explored, taking into account the learnt lemmas. If and when these lemmas
conclude that no model will be found in the entire search space, the problem is concluded to
be UNSAT. Psyche/CDSAT is modular in the collection of agents that contribute background
knowledge about different theories such as propositional logic and linear arithmetic. These
agents offer for each state a range of possible moves, e.g. assigning a truth value to a literal
or a rational value to a rational variable, and apply theory-specific inference rules in order to
compute consequences of such assignments and detect conflicts.
We propose to apply MCTS guidance for applying moves, which requires a transition
probability heuristic for the moves available from a state, and a reward heuristic for states. The
former quickly orients the search towards the next states to look at, while the latter, possibly
more costly to compute but called less often, contributes to maintaining and updating reward
scores for states. These scores are then used to determine whether the search should explore
more deeply an area of the search space or whether it should jump to another area. A specificity
of satisfiabilty solving, when expressed as a tree-search problem, is that there are two kinds of
conclusions, namely SAT and UNSAT, which may impact what the MCTS heuristics try to
achieve, particularly with respect to the exploitation/exploration balance of an MCTS search.
We are investigating the use, for transition probabilities, of existing theory-agnostic heuristics
for choosing assignments, usually based on the activity score of terms and literals. Those that
have often participated to recent conflicts have a high activity [MMZ[+]01] and will be picked
with higher probability. This encourages exploitation, triggering the use of recently used lemmas
and possibly combining them into a proof of UNSAT.
We propose on the other hand to use, for the reward heuristic, an estimation of proximity
between the state to be evaluated and a SAT state / model. This estimation lends itself to
being learnt by a neural network, trained on previously completed runs. We propose to trigger
this evaluation for conflict states, which comprise a trail of assignments, the lemma it generates,
and previous lemmas present at the time of conflict. All of these need to be embedded to a
feature vector that is tractable by a neural network, using similar methods as [WTWD17] or
[JU17]. Training data for the neural network can be generated by feeding actual SAT states
to it, labelling them with maximal reward, as well as feeding it conflict states, labelling them
e.g. with the Hamming distance between the conflict state and the actual SAT state.
One of the reasons why we believe that Psyche lends itself to this approach is that the basic
inferences and the search space are well-identified. Moreover, the prover’s states are persistent
data-structures, inherited from the functional programming nature of the LCF approach, which
should simplify the recording of the states’ rewards and allow quick state switches during
exploration.
At the moment, two components of the proposed approach have been integrated to Psy_che/CDSAT, which is written in OCaml. First, the OCaml code for MCTS, which was originally_
developed for connection tableaux [FKU17], but which is sufficiently modular to be applicable
to other tree search problems. Second, the OCaml bindings for TensorFlow, which can train
-----
Guiding SMT Solvers with MCTS and Neural Networks S. Graham-Lengrand, M. Färber
and apply a neural net directly in Psyche. What is left to do before evaluating the approach
with benchmarks is to encode the feature extraction and organise the training on a suitable set
of examples.
#### References
[BGLS18] Maria Paola Bonacina, Stéphane Graham-Lengrand, and Natarajan Shankar. Proofs
in conflict-driven theory combination. In June Andronick and Amy Felty, editors,
_Proc. of the 7th Int. Conf. on Certified Programs and Proofs (CPP’18). ACM Press,_
January 2018.
[BGS17] Maria Paola Bonacina, Stéphane Graham-Lengrand, and Natarajan Shankar. Satisfiability modulo theories and assignments. In de Moura [dM17], pages 42–59.
[[CDS] The CDSAT system. Available at https://github.com/disteph/cdsat.](https://github.com/disteph/cdsat)
[dM17] Leonardo de Moura, editor. CADE-26, volume 10395 of LNCS. Springer, 2017.
[dMP13] Leonardo Mendonça de Moura and Grant Olney Passmore. The strategy challenge
in SMT solving. In Maria Paola Bonacina and Mark E. Stickel, editors, Automated
_Reasoning and Mathematics - Essays in Memory of William W. McCune, volume_
7788, pages 15–44, 2013.
[FKU17] Michael Färber, Cezary Kaliszyk, and Josef Urban. Monte Carlo tableau proof
search. In de Moura [dM17], pages 563–579.
[GL13] Stéphane Graham-Lengrand. Psyche: a proof-search engine based on sequent
calculus with an LCF-style architecture. In Didier Galmiche and Dominique
Larchey-Wendling, editors, Proc. of the 22nd Int. Conf. on Automated Reasoning
_with Analytic Tableaux and Related Methods (Tableaux’13), volume 8123 of LNCS,_
pages 149–156. Springer-Verlag, September 2013.
[GMW79] Michael Gordon, Robin Milner, and Christopher Wadsworth. Edinburgh LCF: a
_mechanized logic of computation, volume 78. 1979._
[HOL] The HOL system.
[Isa] The Isabelle theorem prover.
[ISA[+]16] Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas Eén, François
Chollet, and Josef Urban. DeepMath - deep sequence models for premise selection.
In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and
Roman Garnett, editors, NIPS, pages 2235–2243, 2016.
[JU17] Jan Jakubuv and Josef Urban. ENIGMA: efficient learning-based inference guiding
machine. In Herman Geuvers, Matthew England, Osman Hasan, Florian Rabe,
and Olaf Teschke, editors, CICM, volume 10383 of LNCS, pages 292–302. Springer,
2017.
[Mil79] Robin Milner. LCF: A way of doing proofs with a machine. In Jirí Becvár, editor,
_Proc. of the the 8th Int. Symp. on Mathematical Foundations of Computer Science,_
volume 74 of LNCS, pages 146–159. Springer-Verlag, 1979.
-----
Guiding SMT Solvers with MCTS and Neural Networks S. Graham-Lengrand, M. Färber
[MMZ[+]01] Matthew W. Moskewicz, Conor F. Madigan, Ying Zhao, Lintao Zhang, and Sharad
Malik. Chaff: Engineering an efficient SAT solver. In DAC, pages 530–535, New
York, NY, USA, 2001.
[SHM[+]16] David Silver, Aja Huang, Christopher J. Maddison, Arthur Guez, Laurent Sifre,
George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu,
Thore Graepel, and Demis Hassabis. Mastering the game of Go with deep neural
networks and tree search. Nature, 529:484–503, 2016.
[SKTW17] Marwin H. S. Segler, Thierry Kogej, Christian Tyrchan, and Mark P. Waller.
Generating focussed molecule libraries for drug discovery with recurrent neural
networks. CoRR, abs/1701.01329, 2017.
[SPW17] Marwin H. S. Segler, Mike Preuss, and Mark P. Waller. Learning to plan chemical
syntheses. CoRR, abs/1708.04202, 2017.
[WTWD17] Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem
proving by deep graph embedding. In Isabelle Guyon, Ulrike von Luxburg, Samy
Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman
Garnett, editors, Advances in Neural Information Processing Systems 30: Annual
_Conference on Neural Information Processing Systems 2017, 4-9 December 2017,_
_Long Beach, CA, USA, pages 2783–2793, 2017._
-----
#### LET'S MAKE SET THEORY GREAT AGAIN!
John Harrison
Amazon
Abstract. Although set theory has long been regarded as the "standard" foundation for mathematics, it's somewhat underrepresented in
current activities in the formalization of mathematics. I'll be discussing
how this situation came about and then suggest that it's time to take
another look at set theory as a foundation. I'll talk a bit about how
set theory, particularly with a few natural conventions, can give quite
a clean and direct formalization of much of elementary mathematics, in
particular aspects that are dicult or ugly in type theory like subtypes
and notions of "undened". On the other hand I'll also discuss some of
the possible problems. The link to AITP topics is tenuous but real: I
believe that set theoretic foundations can have a decluttering eect that
makes the correspondence with everyday mathematics more direct and
hence more susceptible to learning.
-----
### Automation by Analogy, in Coq
Alasdair Hill[1][∗]and Ekaterina Komendantskaya[2]
1 Heriot-Watt University Edinburgh, UK
```
[email protected]
```
2 Heriot-Watt University Edinburgh, UK
```
[email protected]
```
**Abstract**
Automation of popular interactive theorem provers like Coq has become a hot topic,
due to the growing number and size of verification projects such provers accommodate.
Several automation tools for Coq have been suggested, from advanced tactics like Crush,
to SMT-solving solutions like SMT-Coq and Hammer for Coq. In this talk, we will present a
complementary set of automation methods for Coq, based on discovery of proof similarities
and common proof patterns in the code.
#### Extended Abstract
The problem of automating (or aiding) Coq proof construction is as old as Coq itself. Ltac
tactic language is by a wide margin the most popular Coq automation tool to date. Over the
years it hosted a range of impressive extensions like e.g. SSReflect [6] and Crush [3]. Neither
of these extensions is “AI based”, i.e. neither uses automated reasoning or machine learning.
Seeing the succes of Isabelle/HOL in AI based automation [2], it is not unreasonable to predict
that incorportaion of some kinds of AI based tools in Coq may aid to further automate some
aspects of proof development. The main two questions are: (1) what kinds of AI tools? and
(2) which aspects of proof development? Very often, the answer to question (1) determines the
answer to question (2).
For example, one answer to question (1) is to encorporate powerful SMT solvers into Coq,
thus aiming for automation of Coq proofs that correspond to the first-order theory of the
underlying SMT solver. For the phase of translation of proofs from the language of the SMTsolver back to Coq, two engineering solutions are possible. Hammers for Coq [4] approach
suggests to use the “Hammer” methods [2] also employed in Isabelle and HOL, i.e. to reconstruct
the Ltac tactics from SMT proof traces. SMT-Coq [5] approach uses the small scale reflection
to reflect the proofs generated by the SMT-solver back into Coq’s language.
In this talk, we will propose an alternative answer to the questions (1) and (2). We propose to
use a method of statistical pattern-recognition to detect structural similarities among Coq proofs
and definitions. It has been implemented in ML4PG (Machine-Learning for Proof General ) [10,
8]. ML4PG performs a structural analysis of all Coq objects in the given libraries, and discovers
their mutual dependencies and similarities. Based on the discovered patterns, it outputs small
sets (clusters) of similar proofs.
If a theorem of interest belongs to a certain cluster, other lemmas and theorems in that
cluster are deemed to be structurally similar to it, and we can try to reconstruct an Ltac proof
script for a new theorem by analogy with the Ltac scripts of similar proofs in the cluster. Unlike
the SMT-based tools, this method will not be restricted to first-order fragment of Coq proofs,
and it will work similarly for SSReflect or plain Coq proofs. But this method will be limited by
_∗A.Hill is funded by an EPSRC DTA grant._
-----
Automation by Analogy, in Coq Hill and Komendantskaya
the power of the analogical argument. For example, a new theorem may not be provable by
analogy with any other existing theorem, or the analogy may run deeper than any Ltac tactic
combination we may generate. The recent preliminary results [11] showed a big variation in
success of the analogical method, depending on the libraries, ranging from 94% in HoTT Path
library [1], and dropping to 36% in the standard SSReflect library.
In this talk, we will give a detailed experimental study of the power and limitations of the
analogical proof reconstruction in Coq setting. We will show four new prototype tools that
explore the analogies arising from structural similarities of proofs in four different ways, some
involving heuristics such as the automata generation of SEPIA [7]. We compare the performance
of these four new analogical methods on SSReflect, CompCert [12], and CoqHoTT libraries.
A similar study of proof automation by analogy has been done in ACL2 [9].
#### References
[1] S. Awodey, T. Coquand, V. Voevodsky, et al. Homotopy Type Theory: Univalent Foundations
_of Mathematics._ `http://homotopytypetheory.org/book, Institute for Advanced Study, 2013.`
```
https://github.com/HoTT/HoTT/wiki.
```
[2] Jasmin Christian Blanchette, Cezary Kaliszyk, Lawrence C. Paulson, and Josef Urban. Hammering
towards QED. J. Formalized Reasoning, 9(1):101–148, 2016.
[3] Adam Chlipala. Certified Programming with Dependent Types. MIT Press, 2011.
[4] Lukasz Czajka and Cezary Kaliszyk. Goal translation for a hammer for coq (extended abstract). In
_Proceedings First International Workshop on Hammers for Type Theories, HaTT@IJCAR 2016,_
_Coimbra, Portugal, July 1, 2016., volume 210 of EPTCS, pages 13–20, 2016._
[5] Burak Ekici, Alain Mebsout, Cesare Tinelli, Chantal Keller, Guy Katz, Andrew Reynolds, and
Clark W. Barrett. Smtcoq: A plug-in for integrating SMT solvers into coq. In Computer Aided
_Verification - 29th International Conference, CAV 2017, Heidelberg, Germany, July 24-28, 2017,_
_Proceedings, Part II, volume 10427 of Lecture Notes in Computer Science, pages 126–133. Springer,_
2017.
[6] G. Gonthier and A. Mahboubi. An introduction to small scale reflection. Journal of Formalized
_Reasoning, 3(2):95–152, 2010._
[7] Thomas Gransden, Neil Walkinshaw, and Rajeev Raman. SEPIA: search for proofs using inferred
automata. In Automated Deduction - CADE-25 - 25th International Conference on Automated
_Deduction, Berlin, Germany, August 1-7, 2015, Proceedings, volume 9195 of Lecture Notes in_
_Computer Science, pages 246–255. Springer, 2015._
[8] J. Heras and E. Komendantskaya. Recycling Proof Patterns in Coq: Case Studies. _Journal_
_Mathematics in Computer Science, 2014._
[9] J´onathan Heras, Ekaterina Komendantskaya, Moa Johansson, and Ewen Maclean. Proof-pattern
recognition and lemma discovery in ACL2. In Logic for Programming, Artificial Intelligence, and
_Reasoning - 19th International Conference, LPAR-19, Stellenbosch, South Africa, December 14-19,_
_2013. Proceedings, volume 8312 of Lecture Notes in Computer Science, pages 389–406. Springer,_
2013.
[10] E. Komendantskaya et al. Machine Learning for Proof General: interfacing interfaces. Electronic
_Proceedings in Theoretical Computer Science, 118:15–41, 2013._
[11] Ekaterina Komendantskaya and J´onathan Heras. Proof mining with dependent types. In Intelligent
_Computer Mathematics - 10th International Conference, CICM 2017, Edinburgh, UK, July 17-21,_
_2017, Proceedings, volume 10383 of Lecture Notes in Computer Science, pages 303–318. Springer,_
2017.
[12] X. Leroy. Formal verification of a realistic compiler. Communications of the ACM, 52(7):107–115,
2009.
-----
### Enhancing ENIGMA Given Clause Guidance [∗]
Jan Jakub˚uv and Josef Urban
Czech Technical University in Prague, Prague, Czech Republic
#### 1 ENIGMA: Efficient Learning of Given Clause Guidance
State-of-the-art saturation-based automated theorem provers (ATPs) for first-order logic (FOL),
such as E [9], are today’s most advanced tools for general reasoning across a variety of mathematical and scientific domains. Many ATPs employ the given clause algorithm, translating the
input FOL problem T ∪{¬C} into a refutationally equivalent set of clauses. The search for a
contradiction is performed maintaining sets of processed (P ) and unprocessed (U ) clauses. The
algorithm repeatedly selects a given clause g from U, extends U with all clauses inferred with
_g and P_, and moves g to P . This process continues until a contradiction is found, U becomes
empty, or a resource limit is reached. The search space of this loop grows quickly and it is a
well-known fact that the selection of the right given clause is crucial for success.
ENIGMA [5] is an efficient learning-based method for guiding clause selection in saturationbased ATPs. ENIGMA is based on a simple but fast logistic regression algorithm effectively
implemented by the LIBLINEAR open source library [4]. In order to employ logistic regression,
first-order clauses need to be translated to fixed-length numeric feature vectors. ENIGMA
uses (top-down-)oriented term-tree walks of length 3 as features. For example, a unit clause
“P (f (a, b))” contains only features “(P, f, a)” and “(P, f, b)” (see [5, Sec. 3.2] for details).
Features are numbered and a clause C is translated to the feature vector ΦC whose i-th member
counts the number of occurrences of the i-th feature in clause C. In practice, we also count
top-level literal symbols (positive or negative) and we unify variables and Skolem symbols
In order to train an ENIGMA predictor, all the given clauses from a set of previous successful
proof searches are collected. The given clauses used in the proofs are classified as positive and
the remaining given clauses as negative. The clauses are turned into feature vectors and a
LIBLINEAR classifier is trained based on this classification, classifying a clause as useful or
_un-useful for the proof search. The predictor is used to guide next proof searches in combination_
with other E heuristics. Thanks to the fast feature extraction mechanism and the fast (linear)
evaluation of the features in a particular learned model, there is no slowdown of the given
clause loop. In fact, ENIGMA is faster than some of the more advanced hand-programmed
E evaluation heuristics [6]. The training speed allows fast MaLARea-style [12] iterative loop
between ATP proving and re-learning [5, Sec. 5.1].
#### 2 Enhancements
The talk will present several ENIGMA improvements and experiments. First, ENIGMA was
previously tested only on the CASC 2016 AIM benchmark [11] which contains only about 10
different symbols. Using a dense encoding yields feature vectors (and corresponding ENIGMA
models) of size 10[3]. Such exhaustive enumeration of feature vectors gets too big with larger
symbol signatures. It however turns out that the term-tree walks features are relatively sparse:
symbols are typically applied only to a small number of other symbols. Hence we switch
to a sparse encoding, using only the features which actually appear in the training data. This
_∗Supported by the ERC Consolidator grant no. 649043 AI4REASON._
-----
Enhancing ENIGMA Given Clause Predictions with Conjecture Features Jakub˚uv,Urban
_simple_ 84.7 84.6 84.6 84.5 _simple_ 92.2 95.0 90.8 93.9
_50-50_ 76.3 78.0 76.3 77.8 _50-50_ 89.2 91.9 88.8 91.5
|AIM data|train accuracy noconj conj|10-fold cross-val noconj conj|
|---|---|---|
|simple 50-50|84.7 84.6 76.3 78.0|84.6 84.5 76.3 77.8|
|MZR data|train accuracy noconj conj|10-fold cross-val noconj conj|
|---|---|---|
|simple 50-50|92.2 95.0 89.2 91.9|90.8 93.9 88.8 91.5|
Table 1: ENIGMA training and 10-fold cross-validation accuracies (MZR and AIM) (Sec. 3).
significantly reduces the feature vector sizes while preserving the predicting accuracy. This alone
allows us to test ENIGMA on ITP-based benchmarks. An even larger step in this direction
is the use of fast dimension-reduction methods. So far we have efficiently built SVD into E
and are planning to extend this to state-of-the-art fast SVD-based methods that have recently
shown very good performance [8, 2] compared to (slower) neural embeddings.
Second, for AIM we initially did not consider conjectures to be a part of the model. Hence
the same clauses were being recommended in every possible AIM proof search. This can work
(similarly to the hint method [13]) on benchmarks of very related problems (such as AIM), but
hardly on large ITP-based formal libraries, which are much more heterogeneous. To overcome
this, we embed the conjecture features in the feature vectors. For a clause C, instead of using
the vector ΦC of length n, we use a vector (ΦC, ΦG) of length 2n where ΦG contains the features
of the conjecture G. For a training clause C, G corresponds to the conjecture of the proof search
where C was selected as a given clause. When classifying a clause C during the proof search,
_G corresponds to the conjecture currently being proved._
#### 3 Experimental Evaluation
Standard ENIGMA predictors were previously evaluated with a single E strategy on the CASC
2016 AIM benchmark [11]. This was extended to 11 good E strategies [7] and we additionally
evaluate on the CASC 2012 MZR benchmark [10] based on the Mizar MPTP2078 [1] dataset.
Both of these benchmarks provide around 1000 training problems. The MZR problems contain
altogether around 500 different symbols (compared to 10 in AIM), which are used consistently
across different problems. This is crucial for the current symbol-based ENIGMA.[1]
The original (non-conjecture - noconj ) and conjecture-enhanced (conj ) predictor accuracies
are presented in Table 1 both for simple (unbalanced) data and for the 50-50 data with the
positive examples boosted to 50%. The 11 E prover strategies are used to generate the training
data and to build 11 different predictors. We measure the training accuracy where the predictor
is tested on the training data, and the standard 10-fold cross-validation accuracy, where the
data are divided into 10 subsets (folds) with 9 folds used for training and one fold left aside for
evaluation. The results are averaged across the 11 E strategies.
The differences between the training and 10-fold cross-validation accuracies are minimal.
This shows that the relatively simple learner is not overfitting. As expected, adding the conjecture features helps on the MZR data and less on AIM. This is likely because the AIM problems
have more similar conjectures. The training data contain hundreds of thousands clauses (around
110000 on average) and the training times are in seconds (from 5 to 20 on Intel 2.30GHz). The
feature vector sizes vary from 60 to 130 on AIM and from 2300 to 22000 on MZR. A proper
ATP evaluation on MZR is however still future work. It seems that the learned classifiers
still underperform on positive examples. A promising approach is adaptive boosting when we
iteratively build a predictor and boost misclassified positive samples.
1However, this is already less crucial with the SVD-based embeddings, which generalize over the features [3].
-----
Enhancing ENIGMA Given Clause Predictions with Conjecture Features Jakub˚uv,Urban
#### References
[1] Jesse Alama, Tom Heskes, Daniel K¨uhlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise
selection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning, 52(2):191–
213, 2014.
[2] Sanjeev Arora, Yingyu Liang, and Tengyu Ma. A simple but tough-to-beat baseline for sentence
embeddings. In Proceedings of the International Conference on Learning Representations (ICLR),
2017.
[3] Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A.
Harshman. Indexing by Latent Semantic Analysis. JASIS, 41(6):391–407, 1990.
[4] Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, Xiang-Rui Wang, and Chih-Jen Lin. LIBLINEAR:
A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874, 2008.
[5] Jan Jakub˚uv and Josef Urban. ENIGMA: efficient learning-based inference guiding machine. In
_CICM, volume 10383 of Lecture Notes in Computer Science, pages 292–302. Springer, 2017._
[6] Jan Jakubuv and Josef Urban. Extending E prover with similarity based clause selection strategies.
In Michael Kohlhase, Moa Johansson, Bruce R. Miller, Leonardo de Moura, and Frank Wm.
Tompa, editors, Intelligent Computer Mathematics - 9th International Conference, CICM 2016,
_Bialystok, Poland, July 25-29, 2016, Proceedings, volume 9791 of Lecture Notes in Computer_
_Science, pages 151–156. Springer, 2016._
[7] Jan Jakubuv and Josef Urban. BliStrTune: hierarchical invention of theorem proving strategies.
In Yves Bertot and Viktor Vafeiadis, editors, Proceedings of the 6th ACM SIGPLAN Conference
_on Certified Programs and Proofs, CPP 2017, Paris, France, January 16-17, 2017, pages 43–52._
ACM, 2017.
[8] Jiaqi Mu, Suma Bhat, and Pramod Viswanath. Representing sentences as low-rank subspaces.
_CoRR, abs/1704.05358, 2017._
[9] Stephan Schulz. E - A Brainiac Theorem Prover. AI Commun., 15(2-3):111–126, 2002.
[10] G. Sutcliffe. The 6th IJCAR Automated Theorem Proving System Competition - CASC-J6. AI
_Communications, 26(2):211–223, 2013._
[11] Geoff Sutcliffe. The 8th IJCAR automated theorem proving system competition - CASC-J8. AI
_Commun., 29(5):607–619, 2016._
[12] Josef Urban, Geoff Sutcliffe, Petr Pudl´ak, and Jiˇr´ı Vyskoˇcil. MaLARea SG1 - Machine Learner
for Automated Reasoning with Semantic Guidance. In Alessandro Armando, Peter Baumgartner,
and Gilles Dowek, editors, IJCAR, volume 5195 of LNCS, pages 441–456. Springer, 2008.
[13] Robert Veroff. Using hints to increase the effectiveness of an automated reasoning program: Case
studies. Journal of Automated Reasoning, 16(3):223–239, 1996.
-----
#### Towards Machine Learning for Quantication
Mikolá² Janota
INESC-ID, Lisboa
Abstract. The talk will introduce and discuss QBF (Quantied Boolean
Formulas) solving and some machine learning methods for solving QBF
problems. We will also discuss their extensions to the EPR class and
nite model nding.
-----
### leanCoP Project Description: Reinforcement Learning for
Cezary Kaliszyk, Henryk Michalewski, Piotr Miªo±, Mirek Ol²ák, and Josef Urban
#### 1 Introduction: Guiding leanCoP
The talk will describe our initial work on setting up reinforcement learning (RL) experiments
for guiding the leanCoP prover. leanCoP [OB03] is an automated theorem prover (ATP) implementing connected tableau search with iterative deepening, written very economically in
Prolog by Otten. Given the very compact implementation, leanCoP's performance is surprisingly high, regularly outperforming much larger ATPs such as Metis and even Prover9 in the
CASC competition and in particular on problems coming from large formal libraries [KUV15a].
This has already led to several experiments with using machine learning (ML) methods for
guiding leanCoP's proof search. In particular, MaLeCoP [UVv11] and FEMaLeCoP [KU15a] are
guiding the choice of the extension clause/step by a naive Bayes classier trained on a large
number of pairs of previous proof states and proof steps. The states (current path) and steps
(clauses) are characterized using custom term-based features [KUV15b]. First experiments have
also been done with Monte Carlo guidance [FKU17] of leanCoP. Here we continue this work in
several ways:
_• Use of state-of-the-art ML methods. We want to use more advanced ML methods such_
as gradient boosted trees (e.g. XGBoost [CG16]) and deep neural networks.
_• We export leanCoP as a Python interface usable with more advanced ML/RL methods_
and tools.
_• We do rst experiments with the more advanced ML methods using this interface._
#### 2 Exporting leanCoP as a Python Interface
We start with leanCoP's OCaml reimplementation [KUV15a], extended by the feature extraction and advising mechanisms used in FEMaLeCoP. Part of the OCaml code is exported in
C, and compiled as a shared library loadable from Python. The exported functions include
cop_start(filename) and cop_action(length, array). cop_start takes a problem, clausies it, initializes the prover, and returns the initial state with a list of possible actions (clauses).
cop_action takes a re-ordering (ranking) of the actions (typically provided by the trained classier), applies the best to the current proof state, and returns the next proof state and its
possible actions. The states and possible clauses can be printed in dierent formats, either as
strings, or as lists of standard symbol and term-based features, or as lists of integers encoding
the prex notation introduced in [KCS17] and used by several neural approaches. If a proof is
found, it can be printed as a sequence of positive and negative examples. The positive examples
are pairs of (proof_state,clause) that are steps in the nal proof. Negative examples are
obtained as alternatives (proof_state,clause) to the positive examples, where clause was
applicable to proof_state, but this step did not participate in the nal proof.
The communication overhead introduced by the API is small, as witnessed by the inference
speed of about 50,000 inferences per second on selected MPTP problems. The API is very
simple and one of our plans is to connect it to the OpenAI environment [BCP[+]16]. This would
allow further experiments with various RL and supervised methods.
-----
Reinforcement Learning for leanCoP Kaliszyk, Michalewski, Miªo±, Ol²ák, Urban
Figure 1: Naive Bayes-guided PyCoP progress over time (left) and XGBoost-guided PyCoP progress over time (right)
#### 3 First Experiments
We experiment with the large Mizar40 [KU15c] dataset of ATP-provable (and ATP-minimized)
31250 MPTP [Urb06] problems.[1] We also use a smaller dataset of the 2078 MPTP2078 [AHK[+]14]
Mizar bushy problems.[2] Unguided Python-based leanCoP (PyCoP) solves 11402 of the 31250
Mizar40 problems with a 10 s time limit.
In the rst experiment we test the accuracy of our tree-RNN classier[3] on the positive/negative examples extracted (in the prex representation) from the 11402 proofs found by unguided
PyCoP. The results of the experiment are available online.[4] After 15 epochs, the training
accuracy reaches 97.3% and the validation accuracy is about 87.1%.
We also experiment with XGBoost using the standard feature-based representation of the
data. XGBoost is a high-performance implementation of a tree-based classier which typically
performs well on large sparse datasets like ours with millions of features. Figure 1 compares on
the MPTP2078 benchmark the performance of XGBoost-guided PyCoP with PyCoP guided by
the naive Bayes classier developed in [KU15b]. The naive Bayes version so far outperforms
the XGBoost version, however the XGBoost guidance is approximately 30 times slower, making
about 30-times less steps in the same time. We can also see that XGBoost can still prove new
theorems with higher time limits. With appropriate ensembling, XGBoost-guided PyCoP proves
777 MPTP2078 theorems, which seems to be the current record in the 2-hour timeout heavyweight leanCoP category (PyCoP guided by naive Bayes proves 720 theorems).
Finally, we have done rst tests of a simple (Monte Carlo) unsupervised reinforcement
learning setting using the full Mizar40 dataset and our tree-RNN classier. So far we only
allow 50 PyCoP steps guided by the (Monte-Carlo modied) RNN evaluation, which can get
only very simple proofs in the iterative deepening approach used by leanCoP. After a single
pass and learning from the feedback, the system solves about 5% of the problems compared to
2.5-3% when unguided.
[1https://github.com/JUrban/deepmath](https://github.com/JUrban/deepmath)
[2https://github.com/JUrban/MPTP2078](https://github.com/JUrban/MPTP2078)
[3https://github.com/mirefek/HolStep-Tree](https://github.com/mirefek/HolStep-Tree)
[4http://atrey.karlin.mff.cuni.cz/~mirecek/fm/pycop_supervised.log](http://atrey.karlin.mff.cuni.cz/~mirecek/fm/pycop_supervised.log)
-----
Reinforcement Learning for leanCoP Kaliszyk, Michalewski, Miªo±, Ol²ák, Urban
#### References
[AHK[+]14] Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise
selection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning,
52(2):191213, 2014.
[BCP[+]16] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie
Tang, and Wojciech Zaremba. OpenAI gym. CoRR, abs/1606.01540, 2016.
[CG16] Tianqi Chen and Carlos Guestrin. XGBoost: A scalable tree boosting system. CoRR,
abs/1603.02754, 2016.
[FKU17] Michael Färber, Cezary Kaliszyk, and Josef Urban. Monte Carlo tableau proof search. In
Leonardo de Moura, editor, 26th International Conference on Automated Deduction (CADE
2017), volume 10395 of LNCS, pages 563579. Springer, 2017.
[KCS17] Cezary Kaliszyk, François Chollet, and Christian Szegedy. Holstep: A machine learning
dataset for higher-order logic theorem proving. CoRR, abs/1703.00426, 2017.
[KU15a] Cezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly ecient machine learning connection prover. In Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov,
editors, Logic for Programming, Articial Intelligence, and Reasoning - 20th International
Conference, LPAR-20 2015, Suva, Fiji, November 24-28, 2015, Proceedings, volume 9450
of Lecture Notes in Computer Science, pages 8896. Springer, 2015.
[KU15b] Cezary Kaliszyk and Josef Urban. FEMaLeCoP: Fairly ecient machine learning connection prover. In Martin Davis, Ansgar Fehnker, Annabelle McIver, and Andrei Voronkov,
editors, 20th International Conference on Logic for Programming, Articial Intelligence,
and Reasoning (LPAR 2015), volume 9450 of LNCS, pages 8896. Springer, 2015.
[KU15c] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245
256, 2015.
[KUV15a] Cezary Kaliszyk, Josef Urban, and Ji°í Vysko£il. Certied connection tableaux proofs for
HOL Light and TPTP. In Xavier Leroy and Alwen Tiu, editors, Proc. of the 4th Conference
on Certied Programs and Proofs (CPP'15), pages 5966. ACM, 2015.
[KUV15b] Cezary Kaliszyk, Josef Urban, and Ji°í Vysko£il. Ecient semantic features for automated
reasoning over large theories. In Qiang Yang and Michael Wooldridge, editors, IJCAI'15,
pages 30843090. AAAI Press, 2015.
[Lai15] Matthew Lai. Girae: Using deep reinforcement learning to play chess. CoRR,
abs/1509.01549, 2015.
[OB03] Jens Otten and Wolfgang Bibel. leanCoP: lean connection-based theorem proving. J. Symb.
Comput., 36(1-2):139161, 2003.
[RGB10] Stéphane Ross, Georey J. Gordon, and J. Andrew Bagnell. No-regret reductions for
imitation learning and structured prediction. CoRR, abs/1011.0686, 2010.
[Urb06] Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom.
Reasoning, 37(1-2):2143, 2006.
[UVv11] Josef Urban, Ji°í Vysko£il, and Petr t¥pánek. MaLeCoP: Machine learning connection
prover. In Kai Brünnler and George Metcalfe, editors, TABLEAUX, volume 6793 of LNCS,
pages 263277. Springer, 2011.
-----
### Mizar in Isabelle for Formal Abstracts[∗]
Cezary Kaliszyk[1] and Karol P¡k[2]
1 Department of Computer Science, Universität Innsbruck, Austria
[email protected]
2 Institute of Informatics, University of Biaªystok, Poland
[email protected]
Abstract
One of the main goals of the Mizar project has been to create a formal
system that would be attractive for mathematicians. Various developed features
have therefore became an inspiration for extensions and improvements in other
systems. At the same time, the architecture of Mizar has not been exible
enough to accommodate the solutions developed in other systems. In this talk
we present a combination of the Isabelle a modern logical framework with a
Mizar object logic and argue that it can serve as an attractive environment for
formal mathematics. Indeed, the Mizar foundations are a variant of set theory,
which is familiar for mathematicians. Its proof style does correspond to natural
proofs. And the type system was designed to correspond to how mathematicians
classify objects. We will nally discuss the various mechanisms that allow for
grater usability for mathematical statements and proofs.
The Mizar project [1] from its beginning aimed to make a system for human readable formalization of mathematics. Many aspects distinguish it from other proof assistants. Its proof style
imitates informal mathematical proofs. Its type system tries to express how mathematicians
use and categorize mathematical objects. Combining these two features provide a more intuitive environment for formalized mathematics than other systems [8]. Furthermore, formalized
Mizar results have been gathered in the Mizar Mathematical Library (MML). Its focus is on
mathematical results, which makes it complementary to the libraries of various other proof
assistants, including results that have not been formalized in other systems, such as the theory
of lattices, topological manifolds, and random access Turing machines.
In this talk we will present a combination of a modern logical framework Isabelle with
the foundations and the proof style of Mizar and discuss the usability of the resulting proof
environment for the development of formal mathematical abstracts. Our Mizar emulator [7]
created in the Isabelle logical framework provides selected constructs from the Mizar language
[5]. We imitate the type system including intersection types and structures [6], as well as
higher-order concepts, such as set comprehensions and schemes [4].
We will argue that the environment is more convenient for stating and formalizing formal
mathematical statements. It is possible to naturally state mathematical object classications,
for example: X is n-dimensional topological-manifold is a compact Mizar-like statement that is
clear to a mathematician, but stating it in many systems would require unnatural constructions. We show that it will be possible import and cross-verify the whole MML with all its
mathematical results. We will present various manually re-formalized Mizar theorems including
results from set theory, algebra and random access Turing machines. We will also discuss the
Mizar level of human readability.
We discuss our foundations of the Mizar system as an object logic in the Isabelle logical
framework, especially the number of needed constants and axioms. Then we focus on faithful
_∗The paper has been nanced by the resources of the Polish National Science Centre granted by decision no_
DEC-2015/19/D/ST6/01473
-----
Mizar in Isabelle for Formal Abstracts Kaliszyk, P¡k
imitation the Mizar denitional mechanisms. We show adequate mechanisms for each kind
of these denition including the Mizar structures that allows multiple inhabitance. Finally
we show also an experimental mechanism that provides selected Mizar type information in
justications of proof steps.
Various extensions of other proof assistants that imitate the Mizar language have been
proposed. Examples include the Mizar Mode for HOL [11] and the Isar language for Isabelle [10].
These are however limited to a few rules the Ja±kowski natural deduction style [3], and omit
other crucial parts of mathematical text, such as Mizar denitions of more complex objects that
require Mizar-style justication of correctness. Similarly, there have been a number of attempts
to translate the Mizar logic to various other formalisms. Urban [9] exported the MML to the
TPTP rst-order language, and with Brown this was extended to higher-order logic [2]. Such
approaches try to preserve the semantics of Mizar, however do not preserve any of the user
commands or notations. In consequence such translations signicantly reduce proof readability,
so important for formal proof abstracts.
[1] G. Bancerek, C. Byli«ski, A. Grabowski, A. Korniªowicz, R. Matuszewski, A. Naumowicz, K. P¡k,
and J. Urban. Mizar: State-of-the-art and Beyond. In M. Kerber, J. Carette, C. Kaliszyk, F. Rabe,
and V. Sorge, editors, Intelligent Computer Mathematics - International Conference, CICM 2015,
volume 9150 of LNCS, pages 261279. Springer, 2015.
[2] C. E. Brown and J. Urban. Extracting higher-order goals from the Mizar Mathematical Library.
In M. Kohlhase, M. Johansson, B. R. Miller, L. de Moura, and F. W. Tompa, editors, Proc.
9th International Conference on Intelligent Computer Mathematics (CICM 2016), volume 9791 of
LNCS, pages 99114. Springer, 2016.
[3] S. Ja±kowski. On the rules of suppositions. Studia Logica, 1, 1934.
[4] C. Kaliszyk and K. P¡k. Isabelle formalization of set theoretic structures and set comprehensions.
In J. Blamer, T. Kutsia, and D. Simos, editors, 7th International Conference on Mathematical
Aspects of Computer and Information Sciences, MACIS 2017, volume 10693 of LNCS. Springer,
2017.
[5] C. Kaliszyk and K. P¡k. Presentation and manipulation of Mizar properties in an Isabelle object
logic. In H. Geuvers, M. England, O. Hasan, F. Rabe, and O. Teschke, editors, Intelligent Computer
Mathematics - 10th International Conference, CICM 2017, Edinburgh, UK, July 17-21, 2017,
Proceedings, volume 10383 of LNCS, pages 193207. Springer, 2017.
[6] C. Kaliszyk and K. P¡k. Progress in the independent certication of Mizar Mathematical Library
in Isabelle. In M. Ganzha, L. A. Maciaszek, and M. Paprzycki, editors, Proceedings of the 2017
Federated Conference on Computer Science and Information Systems, FedCSIS 2017, pages 227
236, 2017.
[7] C. Kaliszyk, K. P¡k, and J. Urban. Towards a Mizar environment for Isabelle: Foundations and
language. In J. Avigad and A. Chlipala, editors, Proc. 5th Conference on Certied Programs and
Proofs (CPP 2016), pages 5865. ACM, 2016.
[8] A. Trybulec, A. Korniªowicz, A. Naumowicz, and K. T. Kuperberg. Formal mathematics for
mathematicians - special issue. J. Autom. Reasoning, 50(2):119121, 2013.
[9] J. Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom. Reasoning,
37(12):2143, 2006.
[10] M. Wenzel. Isar - A generic interpretative approach to readable formal proof documents. In
Y. Bertot, G. Dowek, A. Hirschowitz, C. Paulin, and L. Théry, editors, Theorem Proving in
Higher Order Logics, 12th International Conference, TPHOLs'99, volume 1690 of LNCS, pages
167184. Springer, 1999.
[11] F. Wiedijk. A synthesis of the procedural and declarative styles of interactive theorem proving.
Logical Methods in Computer Science, 8(1), 2012.
-----
### Mechanizing Principia Logico-Metaphysica in Functional Type Theory
Daniel Kirchner[1], Christoph Benzm¨uller[12], and Edward N. Zalta[3]
1 Freie Universit¨at Berlin
2 University of Luxembourg
3 Stanford University
_Principia Logico-Metaphysica (PLM) [14] aims at a foundational logical theory for meta-_
physics, mathematics and the sciences. It contains a canonical presentation of Abstract Object
Theory (AOT) [15, 16], which distinguishes between abstract and ordinary objects, in the tradition of the work of Mally [7]. The theory systematizes two fundamental kinds of predication:
classical exemplification for ordinary and abstract objects, and encoding for abstract objects.
The latter is a new kind of predication that provides AOT with expressive power beyond that
of quantified second-order modal logic, and this enables elegant formalizations of various metaphysical objects, including the objects presupposed by mathematics and the sciences. More
generally, the system offers a universal logical theory that may have a greater capability of
accurately representing the contents of human thought than other foundational systems.
Independently, the use of shallow semantical embeddings (SSEs) of complex logical systems
in classical higher-order logic (HOL) has shown great potential as a metalogical approach towards universal logical reasoning [1]. The SSE approach aims to unify logical reasoning by
using HOL as a universal metalogic. Only the distinctive primitives of a target logic are represented in the metalogic using their semantical definitions (hence the shallow embedding), while
the rest of the target system is captured by the existing infrastructure of HOL. For example,
quantified modal logic can be encoded by representing propositions as sets of possible worlds
and by representing the connectives, quantifiers, and modal operators as operations on those
sets. This way the world-dependency of Kripke-style semantics can be elegantly represented
in HOL. Utilizing the powerful options to handle and hide such definitions that are offered in
modern proof assistants such as Isabelle/HOL [10], a human-friendly mechanization of even
most challenging target logics, including the AOT, can thus be obtained.
AOT and the SSE approach are rather orthogonal. They have very different motivations
and come with fundamentally different foundational assumptions. AOT uses a hyperintensional
_second-order modal logic, grounded on a relational type theory, as its foundation. It is in the_
tradition of Russell and Whitehead’s Principia Mathematica [11, 8], which takes the notion of
_relation as primitive and defines the notion of function in terms of relations. The metalogic_
HOL in the SSE approach, by contrast, is fully extensional, and defined on top of a functional
type theory in the tradition of the work of Frege [6] and Church [4]. It takes the notion of
(fully extensional) function as primitive and defines the notion of relation in terms of functions.
These fundamentally different and, to some extent, antagonistic roots in turn impose different
requirements on the corresponding frameworks, in particular, with regard to the comprehension
principles that assert the existence of relations and functions. Devising a mapping between
the two formalisms has, unsurprisingly, been identified as a non-trivial, practical challenge by
Oppenheimer and Zalta [12].
The work reported here tackles this challenge. Further details can be found in Kirchner’s
M.A. thesis [9], where the SSE approach is utilized to mechanize and analyze AOT in HOL.
Kirchner constructed a shallow semantical embedding of the second-order modal fragment of
AOT in HOL, and this embedding was subsequently represented in the proof assistant system
-----
Mechanizing Principia Logico-Metaphysica Kirchner, Benzm¨uller and Zalta
Isabelle/HOL. The proof assistant system enabled us to conduct experiments in the spirit of a
_computational metaphysics, with fruitful results that have helped to advance the ideas of AOT._
The inspiration for Kirchner’s embedding comes from the model for AOT proposed by
Peter Aczel.[1] Kirchner also benefited from Benzm¨uller’s initial attempts to embed AOT in Isabelle/HOL. An important goal of the research was to avoid artifactual theorems, i.e., theorems
that (a) are derivable on the basis of special facts about the Aczel model that was used to embed
AOT in Isabelle/HOL, but (b) aren’t theorems of AOT. In previous applications of the SSE
approach, this issue didn’t arise. For example, in the context of the analysis of G¨odel’s modal
ontological argument for the existence of God (cf. [2]), extensive results about the Kripke models were available a priori. But AOT is, in part, a body of theorems, and so care has been taken
not to derive artifactual theorems about the Aczel model that are not theorems of AOT itself.
This explains why the embedding of AOT in Isabelle/HOL involves several layers of abstraction. In the Aczel model of AOT that serves as a starting point, abstract objects are modeled
as sets of properties, where properties are themselves modeled as sets of urelements. Once the
axioms of AOT are derived from the shallow semantic embedding of AOT in HOL, a controlled
and suitably constricted logic layer is defined: by reconstructing the inference principles of AOT
in the system that derives the axioms of AOT, only the theorems of AOT become derivable. By
utilizing Isabelle/HOL’s sophisticated support tools for interactive and automated proof development at this highest level of the embedding, it became straightforward to map the pen and
paper proofs of PLM into corresponding, intuitive, and user-friendly proofs in Isabelle/HOL.
In nearly all cases this mapping is roughly one-to-one, and in several cases the computer proofs
are even shorter. In other words, the de Bruijn factor [13] of this work is close to 1. In addition, the layered construction of the embedding has enabled a detailed, experimental analysis
in Isabelle/HOL of the underlying Aczel model and the semantical properties of AOT.
As an unexpected, but key result of this study, we discovered that if the classical logic for λexpressions and definite descriptions is adjoined to AOT’s specially-formulated comprehension
principle for relations without taking any special precautions, a known paradox (‘the ClarkBoolos paradox’ [5, 3]) that had been successfully put to rest is reintroduced.[2] Since the
complex terms add significant expressive and analytic power to AOT, and play a role in many
of its more interesting theorems, the re-emergence of the known paradox has become a new
paradox that has to be addressed. In the ongoing attempts to find an elegant formulation
of AOT that avoids the new paradox, the computational representation in Isabelle/HOL now
provides a testing infrastructure and serves as an invaluable aid for analyzing various conjectures
and hypothetical solutions to the problem. This illustrates the very idea of computational
_metaphysics: humans and machines team up and split the tedious work in proportion to their_
cognitive and computational strengths and competencies. And as intended, the results we
achieved reconfirm the practical relevance of the SSE approach to universal logical reasoning.
1An earlier model for the theory was proposed by Dana Scott. His model is equivalent to a special case of
an Aczel model with only one special urelement.
2The Clark-Boolos paradox of encoding arises if there are properties of the form [λ∃F (xF &¬Fx] (encoding a
_property that is not exemplified). If such properties were to exist, one could, by object comprehension, generate_
an abstract that encodes such a property. And such an object would exemplify this property if and only if it
does not. To address the paradox, object theory disallows encoding subformulas in the matrix of λ-expressions.
This paradox gets reintroduced, however, if care isn’t taken to disallow λ-expressions with definite descriptions
from β-Conversion. Object theory considers the term [λx Gιzψ] as well-formed, even if ψ contains encoding
subformulas, since these latter are not subformulas of the full matrix. So by choosing G to be a property that
is universally true (e.g., [λy ∀p(p → _p)]) and ψ as z =_ _x ∧∃F (xF ∧¬Fx), the result is a property that, by β-_
Conversion, is extensionally equivalent to the property that led to the Clark-Boolos paradox. The restriction
on β-Conversion, which had been a part of earlier versions of object theory, was originally omitted from PLM,
though as we now know, at the peril of the system.
-----
Mechanizing Principia Logico-Metaphysica Kirchner, Benzm¨uller and Zalta
#### References
[1] Christoph Benzm¨uller. Universal Reasoning, Rational Argumentation and Human-Machine Interaction. CoRR, abs/1703.09620, 2017.
[2] Christoph Benzm¨uller and Bruno Woltzenlogel Paleo. Automating G¨odel’s Ontological Proof of
God’s Existence with Higher-order Automated Theorem Provers. In Torsten Schaub, Gerhard
Friedrich, and Barry O’Sullivan, editors, ECAI 2014, volume 263 of Frontiers in Artificial Intelli_gence and Applications, pages 93–98. IOS Press, 2014._
[3] George Boolos. The Consistency of Freges Foundations of Arithmetic. In J. Thomson, editor, On
_Being and Saying. Cambridge, MA: MIT Press, spring 2017 edition, 1987._
[4] Alonzo Church. A Formulation of the Simple Theory of Types. Journal of Symbolic Logic, 5(2):56–
68, 1940.
[5] Romane Clark. Not Every Object of Thought has Being: A Paradox in Naive Predication Theory.
_Noˆus, 12(2):181–188, 1978._
[6] Gottlob Frege. Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen
_Denkens. Verlag von Louis Nebert, Halle, 1879._
[7] Alexander Hieke and Gerhard Zecha. Ernst Mally. In Edward N. Zalta, editor, The Stanford
_Encyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edition,_
2016.
[8] Andrew David Irvine. Principia Mathematica. In Edward N. Zalta, editor, The Stanford En_cyclopedia of Philosophy. Metaphysics Research Lab, Stanford University, winter 2016 edition,_
2016.
[9] Daniel Kirchner. Representation and Partial Automation of the Principia Logico-Metaphysica in
[Isabelle/HOL. Archive of Formal Proofs, September 2017. http://isa-afp.org/entries/PLM.](http://isa-afp.org/entries/PLM.html)
```
html, Formal proof development.
```
[10] Tobias Nipkow, Lawrence C. Paulson, and Markus Wenzel. Isabelle/HOL — A Proof Assistant
_for Higher-Order Logic, volume 2283 of LNCS. Springer, 2002._
[11] Alfred North Whitehead and Bertrand Russell. Principia Mathematica, volume 3. Cambridge
University Press, Cambridge, 2 edition, 1913.
[12] Paul E. Oppenheimer and Edward N. Zalta. Relations Versus Functions at the Foundations of
Logic: Type-Theoretic Considerations. Journal of Logic and Computation, 21(2):351–374, 2011.
[[13] Freek Wiedijk. The de Bruijn factor. http://www.cs.ru.nl/~freek/factor/.](http://www.cs.ru.nl/~freek/factor/)
[[14] Edward N. Zalta. Principia Logico-Metaphysica. http://mally.stanford.edu/principia.pdf.](http://mally.stanford.edu/principia.pdf)
[Draft/Excerpt; accessed: April 01, 2017].
[15] Edward N. Zalta. Abstract Objects: An Introduction to Axiomatic Metaphysics. Synthese Library.
Springer, 1983.
[16] Edward N. Zalta. Intensional Logic and the Metaphysics of Intentionality. A Bradford book. MIT
Press, 1988.
-----
### Toward AI for Lean, via metaprogramming
Robert Y. Lewis[1][,][2]
1 Carnegie Mellon University, Pittsburgh, PA, USA
2 Vrije Universiteit, Amsterdam, NL
```
[email protected]
```
Lean is a proof assistant being developed at Microsoft Research [3]. The system has been designed from
the beginning to support strong automation. It aims to eventually straddle the line between an interactive
theorem prover with powerful automation, and an automated theorem prover with a verified code base and
interactive mode. A distinguishing feature of Lean is its metaprogramming framework [4]. This framework
is designed to allow users to write their own complex tactics and programs with access to internal system
tools, without needing to touch a line of source code; these tactics and programs exist as part of and in-line
with theory developments.
There has been much recent interest in using AI-based methods for guiding proofs in ITP systems, and
the metaprogramming framework provides a natural way to investigate such methods in Lean. We have
taken preliminary steps toward implementing a relevance filter for heuristic lemma selection, as described in
[2]. The framework also allows connections to external programs. Using an established connection between
Lean and Mathematica [5], it is possible to apply Mathematica’s black-box machine learning tools for similar
purposes. This link is bi-directional, and from within Mathematica, we can use these tools to investigate the
Lean library as a repository of mathematics.
#### 1 Metaprogramming in Lean
Lean is based on the Calculus of Inductive Constructions (CIC) [1]. Dependent type theory is a convenient
language for formalizing mathematical definitions, theorems, and proofs, but it is also a practical and effective
programming language. Alongside its trusted kernel, Lean implements a virtual machine for evaluating Lean
expressions. The VM uses many optimizations to allow for efficient computation: among others, it replaces
terms of type nat with machine integers and terms of type array α n with a native implementation of arrays,
allowing destructive updates when the reference counter for the array is 1 [4]. Declarations in Lean tagged
with the keyword meta are invisible to the kernel and used only for VM computations. For such declarations,
termination checking for recursive calls can be relaxed without making the non-meta language inconsistent.
A number of meta constants, including expr, tactic_state, and environment, are defined to reflect the
underlying implementations of these objects. When the VM evaluates terms that involve these constants,
it associates them with the actual underlying implementations. A tactic can then be thought of as a term
`t : tactic_state →` `tactic_state; more generally, a tactic producing data has type tactic_state →` α ×
```
tactic_state. To invoke such a tactic on a particular goal, Lean constructs the corresponding tactic state
```
and evaluates t applied to this data.
#### 2 A relevance filter, and beyond
Using this framework, we can write metaprograms that fold over a Lean environment – containing the types
and values of all declarations to a point – and produce data. We have followed [2, Section 4], adjusting for
differences between Coq and Lean expressions, to implement simple k-nearest-neighbors and sparse naive
Bayes classifiers to associate types with constants that are likely to appear in their proofs. Since the code is
implemented entirely in Lean, it is relatively easy to extend the feature and label sets.
This work is preliminary and is intended to be part of a future Lean hammer, but can already be used for
informative purposes. Our implementation extends the Lean VM to allow computation with native floats.
While the filter is relatively quick, it does not compare favorably to the more native implementation of the
CoqHammer project. It is an interesting question whether, and how, we could avoid modifying the core VM,
and how we can more efficiently manage big-data computations in the metaprogramming framework.
-----
#### 3 Linking Lean and Mathematica
The metaprogramming framework provides an interface from Lean to command-line and file IO. This allows
users to write tactics that communicate with external programs. In [5] and [6] we describe a bi-directional
interface for exchanging information between Lean and Mathematica. Through this interface, Lean tactics
can query Mathematica for information about Lean expressions – for example, to request the factored form
of a polynomial – and Mathematica can query Lean’s automation and proof library. The translation process
is generic and can be extended as part of theory development.
Mathematica includes a suite of machine learning tools that are designed to work for many applications
without configuration [7]. These tools could be used to implement a more powerful relevance filter than the
simple one described above, without the overhead of designing customized algorithms. This filter would be
accessible from Lean using the link described. These tools are also integrated with Mathematica’s functions
for classifying and displaying data. Following work on computing with arXiv papers, we plan to develop
techniques to explore and visualize the Lean library from within Mathematica.
#### References
[1] T. Coquand and C. Paulin. Inductively defined types. In COLOG-88 (Tallinn, 1988), volume 417 of Lec. Notes
_in Comp. Sci., pages 50–66. Springer, Berlin, 1990._
[2] L. Czajka and C. Kaliszyk. Hammer for Coq, 2017.
[3] L. de Moura, S. Kong, J. Avigad, F. van Doorn, and J. von Raumer. The Lean theorem prover.
_http://leanprover.github.io/files/system.pdf, 2014._
[4] G. Ebner, S. Ullrich, J. Roesch, J. Avigad, and L. de Moura. A metaprogramming framework for formal verification. Proceedings of the ACM on Programming Languages, 1(ICFP):34, 2017.
[5] R. Y. Lewis. An extensible ad hoc interface between Lean and Mathematica. In Proof eXchange for Theorem
_Proving, 2017._
[6] R. Y. Lewis and M. Wu. A bi-directional extensible ad hoc interface between Lean and Mathematica.
_http://www.andrew.cmu.edu/user/rlewis1/leanmm/lean mm cpp.pdf, 2017._
[7] S. Wolfram. An Elementary Introduction to the Wolfram Language. Wolfram Media, Incorporated, 2015.
-----
### Cumulative Effects in Learning[∗]
´
Erik Martin-Dorel and Sergei Soloviev
IRIT, Universit´e de Toulouse, CNRS, Toulouse, France
```
[email protected]
[email protected]
#### Extended Abstract
```
In our previous works we considered the value of information in games (usually in extensive
form or repeated) with respect to winning. It is clear that if the purpose is to win, the complete
knowledge of the opponent behavior is not necessary but some form of learning can be used as
a rule in order to develop a better strategy adapted to the behavior of an opponent.
We studied some situations where a quantitative answer is possible: (i) focusing on 2player Boolean games and relying on probabilistic methods [MDS18][1] and (ii) focusing on the
construction of winning strategies (using hypothesis testing, which can be viewed as a form of
learning) [DSS12].
An interesting observation was that often the probability of existence of a winning strategy
grows very fast with respect to the quantity of information obtained by a player. For example
in the case of Boolean games, the probability of guaranteed win for the first player grows as fast
as 2[2][s], where s is the number of bits of information known on the second player’s strategies.
In some sense, the case of primitive recursive strategies is more interesting because the
recursive function cannot be identified by any finite number of its values, however in the class
of games that we considered, one player could win using a universal function for primitive
recursive functions and finite number of moves of its opponent. In other words, “complete”
learning would require an infinite amount of information, but finite information was sufficient
for winning.
In our present work we study certain game-theoretic models where an interplay between
winning and learning can create interesting cumulative effects.
**Observation. Small changes in probabilistic parameters may have huge effects in cases of**
_• iterated events (queues);_
_• phase transitions;_
_• chain reactions;_
_• a positive feedback._
For example, consider a large population where all members play simultaneously a game
against some opponent that we may call Ignorance (for example, each plays some Boolean game
with a randomly selected formula of some class). Ignorance may always choose an optimal
strategy at his side, he has an unlimited computational power, however he does not know the
choices of the members of the population. Some of them still may win (they may even have a
_[∗This work was partly supported by the FAGames project of LabEx CIMI.](http://www.cimi.univ-toulouse.fr/)_
1A preliminary version of our article is available online as an IRIT research report, cf. the following URL:
```
https://www.irit.fr/publis/ACADIE/IRIT-RR-2017-01-FR.pdf
```
-----
Cumulative Effects in Learning E. Martin-Dorel and S. Soloviev
universal winning strategy in their personal game). Assume now that those who win pass to the
next level and at this level, may know in advance one bit of the strategies chosen by Ignorance
against themselves, and moreover, may help others, for example by telling their friends (defined
by some relation) one bit of the strategies that Ignorance plays against them. This will increase
their probability to win. The win will then let them pass to the next level, and so on.
We think that this approach may be interesting in the models of collective learning. In
particular, it could be applied to the modeling of distributed theorem proving, in order to gain
indications on how to coordinate efforts and better organize feedback. Also, we may try to
apply such an approach to model paradigm shifts.
The starting point of the reflection is the following model that we considered in [MDS18]:
two players A and B control a certain number of Boolean variables (A controls k bits
(a1, a2, . . ., ak) = a and B controls n − _k bits (b1, b2, . . ., bn−k) = b). They play simultane-_
ously a given game represented by a Boolean function F : 2[k] _× 2[n][−][k]_ _→_ **2. Player A wins if**
_F_ (a, b) = true, otherwise player B wins. The game itself (function F ) is assumed to be randomly chosen among the whole class of Boolean functions with n variables, and we specifically
considered the case of Boolean functions generated by a Bernoulli scheme on Boolean vectors
with a parameter 0 < p < 1. We gave quantitative results regarding the probability that there
exists a winning strategy for player A in this model (assuming player A knows s ≥ 0 bits of
information among the strategy (b1, b2, . . ., bn−k) of player B). These results were obtained
symbolically and formally proved in the Coq proof assistant.[2]
We would like to discuss possible generalizations of this model (e.g., regarding the choice
of the probability distribution, regarding the number of players, or regarding the kind of the
strategies. . . ) that could fit our proposed approach to model cumulative effects in learning,
with the ultimate goal to obtain quantitative results that are amenable to formal proof.
Our approach strongly relies on game-theoretic notions but can also be related to the ACRE
learning problem (viz. adversarial classifier reverse engineering [LM05]), which can itself be
viewed as an instance of supervised machine learning.
Beyond the interest on modeling learning scenarios, we believe that the combination of
formal verification techniques and openly discussed (probabilistic) models can move explainable
artificial intelligence forward, in the same way as Belle explains the need to combine logical and
statistical AI in his recent survey [Bel17].
#### References
[Bel17] Vaishak Belle. Logic meets probability: Towards explainable AI systems for uncertain worlds.
[In Carles Sierra, editor, IJCAI 2017, pages 5116–5120. ijcai.org, 2017. doi:10.24963/ijcai.](http://dx.doi.org/10.24963/ijcai.2017/733)
```
2017/733.
```
[DSS12] Evgeny Dantsin, Jan-Georg Smaus, and Sergei Soloviev. Algorithms in Games Evolving in
Time: Winning Strategies Based on Testing. In Isabelle Users Workshop – ITP 2012, 2012.
18 pages.
[LM05] Daniel Lowd and Christopher Meek. Adversarial Learning. In Proceedings of the Eleventh
_ACM SIGKDD International Conference on Knowledge Discovery in Data Mining, KDD ’05,_
[pages 641–647, New York, NY, USA, 2005. ACM. doi:10.1145/1081870.1081950.](http://dx.doi.org/10.1145/1081870.1081950)
[MDS18] Erik Martin-Dorel and Sergei Soloviev. A Formal Study of Boolean Games with Random
Formulas as Payoff Functions. In Herman Geuvers, Silvia Ghilezan, and Jelena Ivetic, editors,
_Post-Proc. of TYPES 2016, Novi Sad, 23/05/2016-26/05/2016, LIPIcs. Schloss Dagstuhl_
Leibniz-Zentrum fur Informatik, 2018. To appear.
[2The accompanying formal proofs are available online at https://sourcesup.renater.fr/coq-bool-games/](https://sourcesup.renater.fr/coq-bool-games/)
-----
### Machine comprehension of math problem text
Takuya Matsuzaki[1] and Noriko H. Arai[2]
1 Nagoya University, Aichi, Japan
[email protected]
2 National Institute for Informatics, Tokyo, Japan
[email protected]
**Abstract**
In a joint work with many people, we have developed a computer system that solves
pre-university level math problems written in natural language. The system is comprised
of two parts. One is a language processing pipeline, which translates a math problem into
a logical formula. The other is a computer algebra system that derives an answer from
the translated problem. In the talk, I will mainly talk about the former part. The main
obstacle in the translation from a natural language into a logical language is the flexibility of
the natural language, which enables us to convey complex meaning in a concise expression
but makes the sentences highly ambiguous for a machine. I will explain how we combat
with it using both logical and statistical means. These two means work complementarily.
First, a statistical model provides a mechanism for selecting a plausible interpretation of a
genuinely ambiguous sentence although such a sentence is rare in a carefully written math
problem. Second, a statistical model works as a ‘mending tape,’ which seals a crack in a
logical model of language. Third, a statistical model can be used as an (ad-hoc) heuristic
function during the search for the most plausible interpretation.
-----
#### If mathematical proof is a game, what are the states and moves?
David McAllester
Toyota Technological Institute at Chicago
Abstract. Alpha Zero has achieved dramatic success in Go, Chess and
Shogi using general deep RL and self-play. Alpha Zero applies a single
learning architecture to dierent states and moves. Intuitively, mathematical proof is also a game. But what are the states and moves of proof?
This talk will argue that the states should be taken to be tableau-style
contexts. We propose a notion of context consisting of variable declarations (let X be foo-space), assumptions (suppose Phi), goal declarations
(to show Phi ...), or focus declarations (consider e). Tableau-style proof
has always been motivated by its naturality its similarity to human
argumentation. Alpha Zero invokes enormous computation in evaluating
states and proposing moves. This talk will also discuss how deep RL
might be applied to evaluating states and proposing moves in tableaustyle argumentation.
-----
### Towards logics for neural conceptors
Till Mossakowski[1] and Razvan Diaconescu[2]
1 Otto-von-Guericke-Universit¨at Magdeburg, Germany
```
[email protected]
```
2 Simion Stoilow Institute of Mathematics of the Romanian Academy, Bucharest, Romania
```
[email protected]
```
**Abstract**
Conceptors are an approach to neuro-symbolic integration based on recurrent neural
networks. We develop a logic for neural conceptors that turns out to be fuzzy. Also, proof
support and theorem proving is discussed.
Neural networks have been successfully used for learning tasks [10], but they exhibit the
problem that the way they compute their output generally cannot be interpreted or explained
at a higher conceptual level [11]. The field of neuro-symbolic integration [1] addresses this
problem by combining neural networks with logical methods. However, most approaches in
the field (like e.g. logic tensor networks [5]) are localist, that is, predicates or other symbolic
items are represented in small sub-networks. This contrasts with the distributed representation
of knowledge in (deep learning) neural networks, which seems to be much more flexible and
powerful.
Jaeger’s conceptors [8, 9] provide such a distributed representation while simultaneously
providing logical operators and concept hierarchies that foster explainability. The basic idea
is to to take a recurrent neural network and not use it for learning through back-propagation,
but rather feed it with input signals, leading to a state space that can be captured as a certain
ellipsoid using a conceptor matrix. Conceptor matrices are positive semidefinite matrices with
singular values (which represent the lengths of the ellipsoid axes) ranging in [0,1].
In [9], Jaeger introduces and studies algebra of conceptors, providing the logical operations
“and”, “or” and “not” (which however satisfy only part of the laws of Boolean algebra) as
well as a scaling operation called aperture adaption, and an interpolation operation. A crucial
advantage of conceptors over ordinary neural networks is that using the algebra of conceptors,
training examples can easily be added to conceptors, without the need of re-training with
the whole sample. Moreover, the L¨owner ordering on conceptor matrices expresses a concept
hierarchy. For reasoning about conceptors, two logics are introduced, an extrinsic and an
intrinsic one. Both logics are based on the conceptor algebra operations. The extrinsic logic
provides a first-order logic with atomic formulas based on the L¨owner ordering. This leads to
two levels of Boolean operations: one within conceptor terms, and one within the first-order
logic. The intrinsic operation avoids this duplication by only working on conceptor terms and
comparing them with a fixed conceptor. In [9], Jaeger formalises both logics as institutions [6],
which are an abstract formalisation of the notion of logical system. Moreover, he states the
open problem of developing a proof calculus and theorem proving support for these logics.
We here argue that both of these logics are not completely adequate for reasoning about
conceptors, because they both can ultimately speak only about the L¨owner ordering, i.e. crisp
statements that can be either true or false. We propose that a more promising approach is to
view conceptors as a kind of fuzzy sets. Indeed, their Boolean operators satisfy the (appropriate
generalisation of) T-norm and T-conorm laws, and form a (generalised) De Morgan Triplet
[12, 7]. This is remarkable, because conceptors have not been introduced as a neuro-fuzzy
approach (and note that neuro-fuzzy approaches generally are localist in the above sense, while
conceptors provide a global distributed representation of knowledge).
-----
easychair: Running title head is undefined. easychair: Running author head is undefined.
We argue that an appropriate conceptor logic should not have crisp but fuzzy statements
as its atomic constituents:
_• classification of an N_ -dimensional signal vector z by a N ×N conceptor matrix C, yielding
the fuzzy truth value z[T] _Cz/N (which can be seen as fuzzy set membership),_
_• a “fuzzy subset” relation C1 ≤_ _C2 between conceptors._
Atomic formulas use conceptor terms formed with the same operations as in Jaeger’s logics.
On this basis, we develop a many-valued institution [3] for conceptors. A (fuzzy) first-order
logic on top of that can be obtained using general methods of defining fuzzy connectives and
quantifiers.
Reasoning in such a logic can be done using the algebraic and order-theoretic laws of conceptors. Theorem proving support can be obtained by a translation to first-order or higher-order
logic (giving approximation or exact capturing of the real numbers), as well as by using SMT
solving over the real numbers.
Fuzzy reasoning can be based on either crisp or graded consequence relations (see [4]). The
latter can capture fuzzy reasoning in a more fine-grained way. Future work will consider the
development of a graded proof calculus that directly captures graded consequence.
#### References
[1] Tarek R. Besold, Artur S. d’Avila Garcez, Sebastian Bader, Howard Bowman, Pedro M. Domingos,
Pascal Hitzler, Kai-Uwe K¨uhnberger, Lu´ıs C. Lamb, Daniel Lowd, Priscila Machado Vieira Lima,
Leo de Penning, Gadi Pinkas, Hoifung Poon, and Gerson Zaverucha. Neural-symbolic learning
and reasoning: A survey and interpretation. CoRR, abs/1711.03902, 2017.
[2] R. Diaconescu. Institution-independent Model Theory. Birkhuser, 2008.
[3] Razvan Diaconescu. Institutional semantics for many-valued logics. _Fuzzy Sets and Systems,_
218:32–52, 2013.
[4] Razvan Diaconescu. Graded consequence: an institution theoretic study. _Soft Comput.,_
18(7):1247–1267, 2014.
[5] Ivan Donadello, Luciano Serafini, and Artur S. d’Avila Garcez. Logic tensor networks for semantic
image interpretation. In Carles Sierra, editor, Proceedings of the Twenty-Sixth International Joint
_Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19-25, 2017, pages_
1596–1602. ijcai.org, 2017.
[6] J. A. Goguen and R. M. Burstall. Institutions: Abstract model theory for specification and
programming. Journal of the Association for Computing Machinery, 39:95–146, 1992. Predecessor
in: LNCS 164, 221–256, 1984.
[7] M.M. Gupta and J. Qi. Theory of T-norms and fuzzy inference methods. Fuzzy Sets and Systems,
40:431–450, 1991.
[8] Herbert Jaeger. Conceptors: an easy introduction. CoRR, abs/1406.2671, 2014.
[9] Herbert Jaeger. Controlling recurrent neural networks by conceptors. CoRR, abs/1403.3369, 2014.
[10] Yann LeCun, Yoshua Bengio, and Geoffrey E. Hinton. Deep learning. Nature, 521(7553):436–444,
2015.
[11] E. Smith and S. Kosslyn. Cognitive psychology: Mind and brain. Upper Saddle River, NJ: PrenticeHall Inc., 2006.
[12] Lotfi A. Zadeh. Fuzzy sets. Information and Control, 8(3):338–353, 1965.
-----
### Designing Games of Theorem Proving
Yutaka Nagashima
CIIRC, CTU, Prague / The University of Innsbruck, Innsbruck
```
first name.last [email protected]
```
**Abstract**
“Theorem proving is similar to the game of Go. So, we can probably improve our provers
using deep learning, like DeepMind built the super-human computer Go program, AlphaGo
[1].” Such optimism has been observed among participants of AITP2017. But is theorem
proving really similar to Go? In this paper, we first identify the similarities and differences
between them and then propose a system in which various provers keep competing against
each other and changing themselves until they prove conjectures provided by users.
#### 1 The Game of Go and Theorem Proving
**Formally defined rules [similarity].** Both the games of Go and theorem proving have
algorithms to evaluate the results. In the game of Go, one can judge the result of each game
when it is over by counting the stones and spaces for each player, and no ambiguity is left
in deciding the result of a game. In theorem proving, when one finds a proof, others can
systematically check if the alleged proof is a valid proof or not.
**Expressive power of the system [difference].** Even though both systems are based on a
set of simple rules, the expressive power of these systems differ. Depending on the underlying
logics, a theorem proving task can involve advanced concepts such as abstraction, universal
quantification, existential quantification, and polymorphism, which Go scores cannot express
natively. This is especially true for more expressive logics such as classical higher-order logics
or variants of calculus of constructions, where stronger proof automation is needed.
**Amount of available training data [difference].** Some theorem proving researchers boast
that they have large proof corpora. For example, the Isabelle community has the Archive of
Formal Proofs (AFPs) [2], consisting of more than 1.5 millions of lines of code and 100 thousands
lemmas. Unfortunately, even though these proof corpora are large for the small community of
theorem provers, they are small compared to the data deployed in other domains.
**Preference towards small data [difference].** The size of the community is not the only
reason of small data available in the theorem proving community: logicians and mathematicians
have developed expressive logics to describe general ideas in a concise manner. Combined with
the trade-off between proof automation and expressive power of underlying logic, this is doubly
unfortunate: the more expressive logic we use, the less proof automation we have, but the more
expressive the logic is, the less training data we can expect, which makes it hard to improve
the proof automation for expressive logics using machine learning techniques.
**Self-playability [similarity/difference].** One might suspect that large data are not necessary to develop a powerful proof automation tool using machine learning. After all, DeepMind
has made AlphaGo Zero [3] stronger than any previous versions of AlphaGo via self-play without using data from human games. Unfortunately, even though both Go and theorem proving
-----
are based on clearly defined rules, theorem proving is not a two-player game by default. In the
rest of this paper, we propose an approach to introducing self-playability to theorem proving.
#### 2 The Design of Self-playable Games of Theorem Proving
One straightforward design of self-playable games of theorem proving is as follows: (1) prepare
a set of proof obligations from existing proof corpora, (2) let two competing provers try to prove
these proof obligations, (3) count how many obligations each prover discharges, (4) consider the
prover that solves more obligations as the winner, and the other one as the loser. We can use this
naive approach as a part of reinforcement learning or evolutionary computation to optimize our
provers for proof obligations that have already been proved. However, this approach is probably
not powerful enough to improve provers for conjectures that are significantly different from the
theorems in the training data
For example, let us assume that we enhance our prover, say P, via self-play using 100
theorems and their proofs in the AFPs. Since we already know how to prove these theorems,
we can improve P, so that P can prove all of the 100 theorems within a reasonable time-out.
However, when we try to improve P to discharge a new conjecture, say Goldbach’s conjecture, we
will find ourselves at a loss of training data: Currently, there is no known proofs of Goldbach’s
conjecture or auxiliary lemmas that are verified to be useful to prove Goldbach’s conjecture.
If we add Goldbach’s conjecture to the above dataset, the improvement via self-play would
saturate after producing a prover that can discharge the 100 theorems from the AFP but not
Goldbach’s conjecture: since the gap between the theorems from the AFPs and Goldbach’s
conjecture is too large, minor mutations to P’s variants cannot produce a useful observable
difference in the result of the game. What we need here is a mechanism to produce conjectures
that we can reasonably expect to be useful to prove our target conjecture (Goldbach’s conjecture
in this example) but not too difficult for our current prover P.
Therefore, we propose to treat conjecturing and proof search as one problem. Of course, we
cannot be 100% sure which conjecture is useful to train our prover for Goldbach’s conjecture,
since nobody has proved it yet. But if we consider a heuristic proof search as the exploration
of an and-or tree, we can estimate how important each node in the tree is from the search
heuristics of the prover. Furthermore, given a long time-out, we can expect that the prover can
discharge some of emerging subgoals, even if the prover cannot discharge the root-node, which
corresponds to the target conjecture (Goldbach’s conjecture, in this example).
Our idea is to use these proved subgoals to judge the competence of other versions of prover
```
P. First, we produce two versions of our prover P by mutation. Let us call them Pa and Pb,
```
respectively. Using the approach explained above, we let Pa produce a dataset Da and let Pb
produce Db. Now, we let Pb try to prove the theorems in Da, and let Pa try to prove the theorems
in Db. When Pa and Pb run out of time, we sum up the estimated values of theorems proved by
each prover (Pa, for example). Note that it was the opponent prover (Pb in this example) that
has decided the value of each theorem in each dataset (Db in this case) when finding proofs of
these subgoals for the first time. The prover that has gained more value is the winner of the
game, and the other is the loser. Then, we keep running this game by mutating the winner until
we produce a prover that can discharge the target conjecture. Since this process generates new
conjectures tagged with their estimated values from the target conjecture in each iteration, we
expect this approach continues producing conjectures useful to prove the target conjecture.
We are still in the early stage of the design. We might generalize this idea to n-player games
to avoid over-fitting. For the moment, we prefer the design of the game to be irrelevant to any
of underlying logics, ML algorithms for search heuristics, and mutation algorithms.
-----
#### Acknowledgement
This work was supported by the European Regional Development Fund under the project
AI&Reasoning (reg. no. CZ.02.1.01/0.0/0.0/15 003/0000466).
#### References
[1] David Silver, et. al. Mastering the game of Go with deep neural networks and tree search Nature
529, January 2016
[2] Gerwin Klein, Tobias Nipkow, and Larry Paulson. The Archive of Formal Proofs.
[3] David Silver, et. al. Mastering the game of Go without human knowledge Nature 550, October
2017
-----
#### Who cares about Euclidean geometry?
Mirek Ol²ák
Charles University
Abstract. We will discuss several reasons why Euclidean geometry could
be a good topic for an AITP project.
-----
### ATP Guidance for Learning Premise Selection
Bartosz Piotrowski and Josef Urban
Czech Technical Univeristy in Prague, Prague, Czech Republic
**Premise Selection as a Machine Learning Problem**
The most efficient methods for premise selection are based on data-driven/machine-learning
approaches. Such methods work as follows. Let T be a set of theorems with their proofs. Let
_C be a set of conjectures without proofs, each associated with a set of available premises that_
can be used to prove them. Having this, we want to learn a (statistical) model from T, which
for each conjecture in c ∈ _C will rank its available premises according to their relevance for_
producing an ATP proof of c. Two different machine learning settings can be used for this task:
1. multilabel classification: we treat premises used in the proofs as opaque labels and we
create a model capable of labeling conjectures based on their features,
2. binary classification: here the aim of the learning model is to recognize pairwise-relevance
of the conjecture-premise pairs, i.e. to decide what is the chance of a premise being relevant
for proving the conjecture based on the features of both the conjecture and the premise.
Most of the machine learning methods for premise selection have so far used the first setting [6, 5, 3]. This includes fast and robust machine learning algorithms such as naive Bayes
and K nearest neighbors (k-NN) capable of multilabel classification with many examples and
labels. This is needed for large formal libraries with many facts and proofs. There are however
several reasons why the second approach may be better:
1. Generality: in binary classification it is easier to estimate the relevance of conjecture_premise pairs where the premise was so far unseen (i.e., not in the training data)._
2. State-of-the-art ML algorithms are often capable of learning subtle aspects of complicated problems based on the features. The multilabel approach trades the rich feature
representation of the premise for its opaque label.
3. Many state-of-the-art ML algorithms are binary classifiers or they struggle when performing multilabel classification for a large number of labels.
Recently, substantial work [2] has been done within the second setting. In particular, applying
deep learning to premise selection has improved state of the art in the field.
**Premise Selection in Binary Setting with Multiple Proofs**
The availibility of multiple ATP proofs makes premise selection different from conventional
machine learning applications. This may be important especially in the binary classification
setting. The ML algorithms for recognizing pairwise relevance of conjecture-premise pairs require good-quality data consisting of two (preferably balanced) classes of positive and negative
examples. But there is no conventional way how to construct such data. For every true conjecture there are infinitely many proofs, based on many different sets of premises.
Consider the following frequent situation: a conjecture c can be ATP-proved with two sets
of axioms: _p1, p2_ and _p3, p4, p5_ . Learning only from one of the examples as positives and
_{_ _}_ _{_ _}_
presenting the other as negative conjecture-premise pairs may considerably distort the learned
notion of a useful premise. This differs from our previous research done in the multilabel
setting [7], where using only one proof typically helped. In the multilabel setting negative data
are typically not used by fast ML algorithms such as naive Bayes and k-NN. They just aggregate
different positive examples into the final ranking. The talk will discuss our experiments that
-----
ATP-guidance for Learning Premise Selection Piotrowski, Urban
attempt to properly take into account multiple ATP proofs while training binary-classification
ML algorithms.
The following observations should be taken into account when designing the ML experiments. The number of ATP proofs may be high and it is intractable to initially generate them
all. Also, some of the proofs are easier to find than other proofs by a particular ATP. And
some of the premises appear more often in the proofs of a given conjecture. This leads to a
MaLARea-style [9] feedback loop between the learner and an ATP, where we gradually learn
which conjecture-premise pairs are relevant for this particular ATP and which are not.
**First Experiments Combining Binary Classification and ATP Feedback**
We use the E prover as the underlying ATP and the XGBoost [4] system to learn pairwiserelevance of conjecture-premise pairs. XGBoost implements a tree-based, gradient boosting ML
algorithm, which has proved efficient in many ML competitions. As a benchmark we use the
MPTP2078 [1] subset of MML. Our feedback loop between E [8] and XGBoost is as follows:
1. We split the ATP-provable (60s CPU limit) theorems into a training set and a test set.
Our initial data consists of theorems with their proofs. Initially, there is only one proof per
theorem. These are the positive examples in our initial training set. Negative examples
are constructed by choosing similar (in terms of feature similarity) examples not present
in the positive class. We train an initial classifier C0 on such training data and set i = 0.
2. We randomly choose a subset S of 10% of the theorems from the training data. For each
_c ∈_ _S we ask Ci to rank the premises available for c._
3. E is run on the theorems in S with initial segments of the 1, 2, ..., 512 top-ranked premises.
4. The positive part of the training data set is updated with all pairs (c, p) where c ∈ _S and_
_p was a premise used in at least one proof we already know for c. The negative part is_
updated with all pairs (c, p) where c ∈ _S and p was not used in any proof of c, and p was_
ranked by Ci at least at position 2 × Nc, where Nc is the number of all premises used in
all known proofs for c. For each newly introduced training example, we randomly drop
an old example from training set.
5. We train classifier Ci+1 on the such updated training set.
6. We measure the ATP performance of Ci+1’s predictions on the test set.
7. We go back to step (2) and repeat the whole procedure for i = i + 1 as long as new proofs
are found or the performance of the trained model improves on the test set.
For a comparison, we also simultaneously train another XGBoost model Cconst with the same
parameters but on the constant training set we created in the step 1. After each cycle we
compare the ATP-performance of the two models. Already the model trained in the first cycle
outperforms the model trained in the standard way and the difference increases up to 20 – 25th
[cycle, see the table below, and the plot at http://bartoszpiotrowski.pl/plot-aitp18.png.](http://bartoszpiotrowski.pl/plot-aitp18.png)
In addition to several related ML/ATP MaLARea-style feedback loops, related work includes [2]. There, large improvement are obtained with negative mining done however without
interaction with an ATP. We believe that our setting has several advantages:
1. The machine learner learns on multiple proofs.
2. The frequency of premises appearing in these proofs has influence on the learning.
3. Evolving the data set throughout the training may help prevent overfitting.
4. Randomly dropping data might help to get rid of contradictory examples and gradually
evolve the classifier to optimally reflect pairwise relevance for a given ATP.
Cycle of training (i) 0 5 10 15 20 25
Model trained with ATP-negative-mining (Ci) 44% 67% 70% 72% 73% 73%
Model trained on a constant dataset (Cconst) 44% 50% 51% 53% 53% 53%
-----
ATP-guidance for Learning Premise Selection Piotrowski, Urban
#### References
[1] Jesse Alama, Tom Heskes, Daniel Kühlwein, Evgeni Tsivtsivadze, and Josef Urban. Premise selection for mathematics by corpus analysis and kernel methods. J. Autom. Reasoning, 52(2):191–213,
2014.
[2] Alex A. Alemi, Francois Chollet, Geoffrey Irving, Christian Szegedy, and Josef Urban, editors.
_DeepMath - Deep Sequence Models for Premise Selection, 2016._
[3] Jasmin Christian Blanchette, David Greenaway, Cezary Kaliszyk, Daniel Kühlwein, and Josef Urban. A learning-based fact selector for Isabelle/HOL. J. Autom. Reasoning, 57(3):219–244, 2016.
[4] T. Chen and C. Guestrin. Xgboost: A scalable tree boosting system. 2016.
[5] Cezary Kaliszyk and Josef Urban. Learning-assisted automated reasoning with Flyspeck. J. Autom.
_Reasoning, 53(2):173–213, 2014._
[6] Cezary Kaliszyk and Josef Urban. Mizar 40 for mizar 40. _Journal of Automated Reasoning,_
55(3):245–256, Oct 2015.
[7] Daniel Kuehlwein and Josef Urban. Learning from multiple proofs: First experiments. In Pascal
Fontaine, Renate A. Schmidt, and Stephan Schulz, editors, PAAR-2012, volume 21 of EPiC Series,
pages 82–94. EasyChair, 2013.
[8] Stephan Schulz. System description: E 1.8. In Kenneth L. McMillan, Aart Middeldorp, and Andrei
Voronkov, editors, LPAR, volume 8312 of LNCS, pages 735–743. Springer, 2013.
[9] Josef Urban, Geoff Sutcliffe, Petr Pudlák, and Jiří Vyskočil. MaLARea SG1 - Machine Learner for
Automated Reasoning with Semantic Guidance. In Alessandro Armando, Peter Baumgartner, and
Gilles Dowek, editors, IJCAR, volume 5195 of LNCS, pages 441–456. Springer, 2008.
-----
### Dynamic strategy priority: empower the strong and abandon the weak
Michael Rawson and Giles Reger
University of Manchester, Manchester, UK
Many modern automated theorem provers (e.g. CVC4[1] [1], E [8], iProver [2], Vampire [3])
rely on portfolio modes [7] utilising from tens to hundreds of distinct strategies, of which only
a few might solve a hard problem quickly. Typically, a portfolio of strategies has a pre-defined
order that the prover will execute the strategies in, running each until a strategy succeeds.
Portfolios are important as, in practice, there is no best strategy i.e. it is uncommon that two
hard problems are efficiently solvable by the same strategy. However, portfolio execution is not
without problems: selecting the optimal ordering and time allocation is hard in general [6], and
produces overly-rigid, brittle engineering when applied to specific domains, such as those found
in TPTP [9]. Moreover, for any particular problem, some lengthy strategies that are successful
on other problems are doomed to failure from the outset, but are left to run unchecked by the
prover, wasting time that could be spent on more productive strategies.
In this work we first demonstrate correlation between trends in dynamic properties of proof
search, and the success or failure of a strategy. We then utilise this to implement strategy
scheduling, prioritising those strategies most likely to succeed. This approach differs from
previous work [4,5,8] which attempts to predict successful strategies a priori from static features
of the input problem; instead we tip running strategies for success based on run-time features
and use this information to make scheduling decisions. Initial experiments on Vampire produce
a performant neural-network that achieves classification accuracy of 81% (±2%).
**Obtaining and filtering Vampire execution data.** Modifying Vampire to log execution
data for different strategies obtained from its primary portfolio mode[2] is straightforward, but
there are choices to be made along the way. First, which data sources are interesting? As a proof
of concept, we focus mostly on the numerical data (e.g. the number of generated clauses) that is
immediately available in the prover environment, but there is scope here for non-numeric and/or
derived data sources that could provide greater insight into the proof state. Data was logged
at a fixed interval of resolution steps — unfortunately, this does not necessarily correspond
to a fixed amount of time. Addressing this is left as future work. Overall, this methodology
produces extremely voluminous, irregular data. We chose to apply a rolling time average to
normalise/reduce trace size, then allow the neural net to do its own feature selection.
**Identifying and predicting successful strategies.** Being able to predict which strategies
are going to succeed, and which will fail (or exceed the time limit) at first seems unlikely.
However, it is known that the “slowly-growing search space” maxim, which states that strategies
which minimise the number of derived clauses over time are more likely to succeed, is an effective
heuristic for finding good strategies in saturation-based theorem proving [6]. Since the data
we use includes the number of derived clauses, among many other features, it appears more
plausible that a machine-learnt approach might work at least as well as the slow-growth heuristic
1Whilst SMT solvers are less reliant on heuristic strategies than saturation-based techniques, they still
typically employ various strategies, for example for quantifier instantiation.
2CASC-mode, a portfolio designed for the CASC competition [10].
-----
Dynamic Strategy Priority Rawson and Reger
alone. We engineer a prediction algorithm that attempts to partition traces into “succeeding”
and “failing” classes using a simple neural network. Conveniently, these methods do not usually
produce a binary output, but instead some f (X) ∈ [0, 1] which might be seen as the “level of
confidence” in success of the trace X. This success score can be used to apply an ordering to
executing strategies, allowing “smart” dynamic scheduling of strategies.
**Smart scheduling for Vampire** We show that this abstract predictor can be used in a
concrete implementation for the Vampire prover. In the modified prover, it is used to run
several strategies from Vampire’s portfolio in a time-slicing scheduler: at each slice, the most
promising strategies are run. The eventual aim is to improve Vampire’s overall performance, if
not in the number of total theorems proved (this will likely not change in this experiment — if
the entire strategy schedule runs and fails, it doesn’t matter which way it is ordered), but in the
time taken to prove problems. Current benchmark results indicate a 10% average improvement
in time, in exchange for losing some problems in the benchmark.
For this demonstration we focus on maximising the accuracy and efficiency of the predictor
(a perfect predictor would schedule the best strategy first, every time!), rather than tweaking
performance of the scheduler. As well as improvements to our prediction techniques, further
research might include designing scheduling algorithms which keep predictions as up-to-date
as possible, maximise processor utilisation, minimise memory usage/swapping, reduce contextswitching overhead, or even minimise calls to the predictor, and observing the effect on prover
performance.
#### References
[1] Morgan Deters, Andrew Reynolds, Tim King, Clark W. Barrett, and Cesare Tinelli. A tour of
CVC4: how it works, and how to use it. In Formal Methods in Computer-Aided Design, FMCAD
_2014, Lausanne, Switzerland, October 21-24, 2014, page 7, 2014._
[2] K. Korovin. iProver – an instantiation-based theorem prover for first-order logic (system description). In A. Armando, P. Baumgartner, and G. Dowek, editors, Proceedings of the 4th International
_Joint Conference on Automated Reasoning, (IJCAR 2008), volume 5195 of Lecture Notes in Com-_
_puter Science, pages 292–298. Springer, 2008._
[3] Laura Kov´acs and Andrei Voronkov. First-order theorem proving and Vampire. In International
_Conference on Computer Aided Verification, pages 1–35. Springer, 2013._
[4] Daniel K¨uhlwein, Stephan Schulz, and Josef Urban. E-MaLeS 1.1. In International Conference
_on Automated Deduction, pages 407–413. Springer, 2013._
[5] William McCune and Larry Wos. Otter — the CADE-13 competition incarnations. Journal of
_Automated Reasoning, 18(2):211–220, 1997._
[6] Giles Reger and Martin Suda. Measuring progress to predict success: Can a good proof strategy
be evolved? AITP 2017, 2017.
[7] Giles Reger, Martin Suda, and Andrei Voronkov. The challenges of evaluating a new feature in
Vampire. In Vampire Workshop, pages 70–74, 2014.
[8] Stephan Schulz. E — a brainiac theorem prover. AI Communications, 15(2, 3):111–126, 2002.
[9] G. Sutcliffe. The TPTP problem library and associated infrastructure, from CNF to TH0, TPTP
v6.4.0. Journal of Automated Reasoning, 59(4):483–502, 2017.
[10] Geoff Sutcliffe and Christian Suttner. The state of CASC. AI Communications, 19(1):35–48, 2006.
-----
#### Implementation of Lambda-Free Higher-Order Superposition
Petar Vukmirovic
Vrije Universiteit Amsterdam
Abstract. In the last decades, rst-order logic has emerged as a standard language for describing a large number of mathematical theories.
Many rst-order automatic theorem provers have been developed. Higherorder logic enables one to describe more theories and to describe some
theories more succinctly, but higher-order provers are much less mature
than their rst-order counterparts. In this work, we extend the state-ofthe-art rst-order prover E to lambda-free higher-order logic, resulting
in the new prover hoE. We devise generalizations of E's indexing data
structures, as well as algorithms like matching and unication. Generalizations we give exhibit exactly the same behavior and time complexity
as the original E on rst-order problems. Furthermore, experimentation
showed that on lambda-free higher-order problems, hoE is 20% faster on
average than E with the traditional encoding of higher-order terms.
-----
### Disambiguating ProofWiki into Mizar: First Steps
Jiˇr´ı Vyskoˇcil[∗] Josef Urban[∗]
Czech Technical University in Prague Czech Technical University in Prague
**1** **Autoformalization and Pr∞fWiki**
The talk will describe progress in the project of automatically formalizing informal mathematics
by using statistical parsing methods and large-theory automated reasoning. In the first part of
the talk we will summarize the overall autoformalization approach and its performance on the
_ambiguated Flyspeck [3] and Mizar [2] corpora as recently described in [5, 6]. The second part_
will present our initial adventures in the land of strictly informal mathematics: trying to align
and learn the translation between the informal Pr∞fWiki and the formal Mizar corpora.
The Pr∞fWiki[1] project was started in 2008, aiming to provide an online platform for explaining and editing mathematical knowledge, particularly proofs. Each Pr∞fWiki page contains one
(proved) fact or definition. The pages use wiki formatting and MathJax[2] rendering of TE[X. Con-]
cepts, facts and proofs are presented in great detail[3] and they link to the facts and concepts that
they directly depend on. The language is informal but quite regular: our early exploration [7]
has shown that the top 100 generalized proof sentences cover about 50% of the corpus.
Pr∞fWiki has no formal syntax or semantics and there is no formal checker/verifier. However, in the recent work of Bancerek done within the AI4REASON project, Pr∞fWiki has been
extended by 500 new definitions/theorems that are linked to their formal counterparts in the
Mizar library (MML). Our present goal is to develop methods for parsing and translating the
Pr∞fWiki-style informal texts into formal Mizar-style parse trees. The formal parse trees can
then be typechecked [6], translated into first-order logic [10], and verified with large-theory ATP
methods for Mizar [4], providing semantic feedback to the statistical parsing methods.
**2** **Lexical Analyzer for Informal Mathematics and Pr∞fWiki**
For the earlier autoformalization experiments we have used a simple lexical analyzer based on
whitespace-separated tokens. This was sufficient, because the ambiguous texts were produced
by our informalization of the formal corpora using spaces as token separators.
Pr∞fWiki and many informal math resources however typically combine TE[X commands with]
text without obvious token separators. Compared to ordinary English, which usually has simple
word separation, mathematicians often compose operators and arguments together, and they use
different separators in different situations. Sometimes they omit the operators altogether (e.g.
multiplication). This prevents us from using standard lexical analyzers developed by the Natural
Language Processing (NLP) community. In the talk, we will explain how we have modified our
CYK-based parser so that it can parse several possible tokenizations of all words in one pass.
Our experiments show that this does not significantly affect the efficiency of parsing.[4]
**3** **From Pr∞fWiki to Mizar**
Consider the following short example[56] of the Pr∞fWiki–Mizar alignment:
_∗Funded by the ERC project no. 649043 – AI4REASON._
[1https://proofwiki.org](https://proofwiki.org)
[2https://www.mathjax.org/](https://www.mathjax.org/)
3In particular, the proof style is very reminiscent of the Ja´skowski-style natural deduction used by Mizar.
4The slowdown due to the modification measured on the informalized MML is 9%.
[5https://proofwiki.org/wiki/Singleton_is_Chain](https://proofwiki.org/wiki/Singleton_is_Chain)
[6http://www.mizar.org/version/current/html/orders_2.html#T8](http://www.mizar.org/version/current/html/orders_2.html#T8)
-----
Disambiguating ProofWiki into Mizar Vyskoˇcil and Urban
PW code `Let $\left(S, \preceq\right)$ be an ordered set.` `Let $x \in S$.` `Then $\left\ x\right\$ is`
```
a chain of $\left(S, \preceq\right)$.
```
PW display Let (S, ⪯) be an ordered set. Let x ∈ _S. Then {x} is a chain of (S, ⪯)._
Mizar `for A being non empty reflexive RelStr for a being Element of A holds {a} is Chain of A`
Mizar parse `(Bool for (Varlist (Set (Var A))) being (Type (@ListOfAdjectives (Adjective ($#~nv2_struct_0`
```
non (Attribute ($#nv2_struct_0 empty)))) (Adjective (Attribute ($#nv3_orders_2 reflexive))))
($#nl1_orders_2 RelStr)) (Bool for (Varlist (Set (Var a))) being (Type (@ListOfAdjectives)
($#nm1_struct_0 Element) of (Set (Var A))) holds (Bool (Set ($#nk6_domain_1 {) (Set (Var a))
($#nk6_domain_1 })) is (Type (@ListOfAdjectives) ($#nm2_orders_2 Chain) of (Set (Var A))))))
```
Parsing Pr∞fWiki into a Mizar-like parse tree requires transformations of varied difficulty:
1. The Pr∞fWiki chain can map directly to the Mizar-style subtree ($#nm2_orders_2
```
chain), possibly additionally aligning chain with Chain as synonyms.
```
2. The Pr∞fWiki TE[X text][ "][\][left][\{ {][x][}\][right][\}]["][ needs to be mapped to the Mizar-style]
subtree `(Set ($#nk6_domain_1 {) (Set (Var x)) ($#nk6_domain_1_part_1 })).`
3. "ordered set" needs to be mapped to Mizar "non empty reflexive RelStr".
4. "Let...Then..." needs to be mapped to Mizar as "for...holds...". Etc.
(1) is just a new grammar rule that can be learned from the treebank. The other examples however require more complex tree transformations that cannot be expressed as simple grammar
rules. As a general approach, we have introduced a grammar extension [7] that allows evaluation of arbitrary Lisp-like programs at nonterminal positions. This allows us to use our existing
PCFG infrastructure for CYK-like parsing with powerful ways of modifying already parsed parts
of the input. Following is an example of a subtree (and code) that performs the mapping (2):
```
(Set ("PW_TeX_Singleton@@@
(lambda (L LSB LB X RB R RSB)
(list (gtree ’$#nk6_domain_1 ’{) X (gtree ’$#nk6_domain_1_part_1 ’})))"
"\left" "\{" "{" (Set (Var "x" )) "}" "\right" "\}" ))
```
When the whole subtree is parsed, the Lisp-like code in the nonterminal "PW_TeX_Singleton@@@..."
(in italics) is executed using its children/subtrees as arguments, resulting in the tree in (2).
We hope that in many cases, such Lisp-like programs will be semi-automatically learnable by
the following bootstrapping procedure:
1. The parser run on the corpus of Pr∞fWiki texts will identify the parts of input that cannot
be parsed yet. This can be done by using a special low-probability nonterminal "UNKNOWN"
that propagates through most of the grammar rules, thus marking the failed fragments.
2. The failed fragments will be aligned with the corresponding Mizar subtrees.
3. This yields a corpus of Pr∞fWiki - Mizar pairs where the parsing fails so far.
4. This corpus can be mined for common frequent patterns by symbolic learning methods such
as genetic programming and inductive logic programming. Such methods can gradually
create a corpus of more and more advanced Lisp-like functions that build on each other
(also during the learning phase).
5. Some frequently occurring pairs will probably need transformations that are beyond the
means of the learning methods (at least initially). Having the statistics of such failed pairs
will however still be useful for focusing the human annotators on the bottlenecks.
6. As in the Flyspeck and Mizar experiments, the most probable parses will be subjected
to typechecking and large-theory ATP, using the whole Mizar library as a background
knowledge. We will try to use the (parsed) steps in the Pr∞fWiki proofs as additional
lemmas useful for structuring the ATP work into smaller parts.
7Compared to some existing frameworks [8, 9], our extension is lightweight and its integration of more advanced
probabilistic models (compared to [1]) has already been tested at large on the Flyspeck project [5].
-----
Disambiguating ProofWiki into Mizar Vyskoˇcil and Urban
**References**
[1] Krasimir Angelov and Peter Ljungl¨of. Fast statistical parsing with parallel multiple context-free
grammars. In Gosse Bouma and Yannick Parmentier, editors, Proceedings of the 14th Conference of
_the European Chapter of the Association for Computational Linguistics, EACL 2014, April 26-30,_
_2014, Gothenburg, Sweden, pages 368–376. The Association for Computer Linguistics, 2014._
[2] Adam Grabowski, Artur Korni lowicz, and Adam Naumowicz. Mizar in a nutshell. J. Formalized
_Reasoning, 3(2):153–245, 2010._
[3] T. Hales, M. Adams, G. Bauer, D. Tat Dang, J. Harrison, T. Le Hoang, C. Kaliszyk, V. Magron, S. McLaughlin, T. Tat Nguyen, T. Quang Nguyen, T. Nipkow, S. Obua, J. Pleso, J. Rute,
A. Solovyev, A. Hoai Thi Ta, T. N. Tran, D. Thi Trieu, J. Urban, K. Khac Vu, and R. Zumkeller.
A formal proof of the Kepler conjecture. Forum of Mathematics, Pi, 5, 2017.
[4] Cezary Kaliszyk and Josef Urban. MizAR 40 for Mizar 40. J. Autom. Reasoning, 55(3):245–256,
2015.
[5] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. Automating formalization by statistical and semantic parsing of mathematics. In Mauricio Ayala-Rinc´on and C´esar A. Mu˜noz, editors, Interactive
_Theorem Proving - 8th International Conference, ITP 2017, Bras´ılia, Brazil, September 26-29, 2017,_
_Proceedings, volume 10499 of Lecture Notes in Computer Science, pages 12–27. Springer, 2017._
[6] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. System description: Statistical parsing of informal[ized mizar formulas. In SYNASC 2017, 2017. To appear. https://www.ciirc.cvut.cz/~urbanjo3/](https://www.ciirc.cvut.cz/~urbanjo3/synasc17.pdf)
```
synasc17.pdf.
```
[7] Cezary Kaliszyk, Josef Urban, Jiˇr´ı Vyskoˇcil, and Herman Geuvers. Developing corpus-based translation methods between informal and formal mathematics: Project description. In Stephen M. Watt,
James H. Davenport, Alan P. Sexton, Petr Sojka, and Josef Urban, editors, Intelligent Computer
_Mathematics - International Conference, CICM 2014, Coimbra, Portugal, July 7-11, 2014. Proceed-_
_ings, volume 8543 of LNCS, pages 435–439. Springer, 2014._
[8] Aarne Ranta. Grammatical Framework: Programming with Multilingual Grammars. CSLI Publications, Stanford, 2011. ISBN-10: 1-57586-626-9 (Paper), 1-57586-627-7 (Cloth).
[9] Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. On multiple context-free
grammars. Theor. Comput. Sci., 88(2):191–229, 1991.
[10] Josef Urban. MPTP 0.2: Design, implementation, and initial experiments. J. Autom. Reasoning,
37(1-2):21–43, 2006.
-----
### Building an Auto-Formalization Infrastructure through Text-Mining on Mathematical Literature: Project Description
Qingxiang, Wang[12]
1 University of Innsbruck
2 Czech Technical University in Prague
```
[email protected]
```
Formally checking the correctness of research-level mathematical proofs would inevitably
involve the formidable task of formalizing mathematics knowledge in existing literature. Current formalization corpora however, despite many man-years of manual effort, barely capture
the majority of mathematical knowledge beyond undergraduate mathematics curriculum. The
abundance of raw mathematical literature freely available on the internet encourages a machine
learning approach to formalization. Deep learning [1], being the most effective form of machine learning at present, and having shown spectacular results in planning [2] and translation
tasks [3], should be helpful to advance research in theorem proving. However, despite initial
applications of deep learning in theorem proving [4, 5, 6, 7], we have not seen any application
that can directly leverage the abundance of natural language proofs.
Previous works trying to bridge the gap between informal and formal mathematics have
gradually drawn the conclusion that machine learning approach should be used [8]. Various
algorithms [9, 10] have been experimented with and training data sets have been extracted from
Flyspeck [11] and Mizar [12, 13] throughout the last three years. In this research project, we
will propose a set of deep learning enabled tools that gradually transform informal mathematics
into formal mathematics.
The transformation will consist of the following three main phases.
1. Syntactical Analysis Phase. After proper preprocessing, the first component will
conduct several phases of syntactic analysis to extract structural information from each
of the mathematical statements. In addition to texts, we will attempt to extract the
logical information embedded in formulas and eventually perhaps also in diagrams. As
proofs may omit details at advanced level, cross-document analysis between elementary
materials and advanced materials can be adopted to uncover details that are implicit
in sketchier proofs. The end result of the first component will be a more refined, more
explicit and more controlled natural language (or pseudo programming language) proof
that mimics the proof presentation format as proposed by Lamport [14].
2. Mapping Phase. In the second phase we will first attempt to obtain a crude mapping
by attaching formalized proofs to their corresponding informal proofs that have been
syntactically analyzed in the first phase. The structural similarity between the informal
and formal proofs will then be further exploited to obtain a more refined mapping. We will
then experiment with various methods so as to further refine this mapping until tokenlevel details can be put to correspondence. By the end of this refinement process we will
be able to obtain a collection of semantically-annotated and formally-correct proofs that
have complete information on informal-to-formal correspondence.
3. Generalization phase. The joining of the bridge will be the design of a deep reinforcement learning architecture that can generalize the formalization correspondence from the
-----
Auto-Formalization Proposal for AITP 2018 Qingxiang Wang
above subset of semantically-annotated formally-correct proofs to all the other proofs
that have no formalized counterparts. The neural networks in this architecture may be
affected by overfitting when there is not enough training data, but it is notable that if a
formalization generated from the architecture is formally correct, then this formalization
can be used as training data to further train the neural networks. This positive feedback
loop can continue provided enough mathematical literature is fed into the architecture.
Our auto-formalization infrastructure will be able to generate formalized proofs for humanproved theorems that have never been formally verified. We believe that the main focus of
deep learning applications to theorem proving should be in auto-formalization instead of proof
automation. Such a formalization infrastructure, if fully developed, can significantly increase
the amount of formalized mathematics and help to ensure the quality of research works done
by contemporary mathematicians.
#### References
[[1] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http:](http://www.deeplearningbook.org)
```
//www.deeplearningbook.org.
```
[2] Kai Arulkumaran, Marc Peter Deisenroth, Miles Brundage, and Anil Anthony Bharath. A brief
survey of deep reinforcement learning, 2017.
[3] Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation
of gated recurrent neural networks on sequence modeling, 2014.
[4] Geoffrey Irving, Christian Szegedy, Alexander A. Alemi, Niklas E´en, Fran¸cois Chollet, and Josef
Urban. Deepmath - deep sequence models for premise selection. In Advances in Neural Informa_tion Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016,_
_December 5-10, 2016, Barcelona, Spain, pages 2235–2243, 2016._
[5] Sarah M. Loos, Geoffrey Irving, Christian Szegedy, and Cezary Kaliszyk. Deep network guided
proof search. In LPAR-21, 21st International Conference on Logic for Programming, Artificial
_Intelligence and Reasoning, Maun, Botswana, May 7-12, 2017, pages 85–105, 2017._
[6] Daniel Whalen. Holophrasm: a neural automated theorem prover for higher-order logic, 2016.
[7] Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by
deep graph embedding, 2017.
[8] Cezary Kaliszyk, Josef Urban, Jiˇr´ı Vysko¸cil, and Herman Geuvers. Developing corpus-based translation methods between informal and formal mathematics. In Stephen Watt, James Davenport,
Alan Sexton, Petr Sojka, and Josef Urban, editors, Proc. of the 7th Conference on Intelligent
_Computer Mathematics (CICM’14), volume 8543 of LNCS, pages 435–439. Springer Verlag, 2014._
[9] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. Learning to parse on aligned corpora (rough
diamond). In Christian Urban and Xingyuan Zhang, editors, Proc. 6h Conference on Interactive
_Theorem Proving (ITP’15), volume 9236 of LNCS, pages 227–233. Springer-Verlag, 2015._
[10] Cezary Kaliszyk, Josef Urban, and Jiˇr´ı Vyskoˇcil. Automating formalization by statistical and
semantic parsing of mathematics. In Mauricio Ayala-Rinc´on and C´esar A. Mu˜noz, editors, 8th
_International Conference on Interactive Theorem Proving (ITP 2017), volume 10499 of Lecture_
_Notes in Computer Science, pages 12–27. Springer, 2017._
[11] Thomas Hales, Mark Adams, Gertrud Bauer, Tat Dat Dang, John Harrison, Le Truong Hoang,
Cezary Kaliszyk, Victor Magron, Sean Mclaughlin, Tat Thang Nguyen, Quang Truong Nguyen,
Tobias Nipkow, Steven Obua, Joseph Pleso, Jason Rute, Alexey Solovyev, Thi Hoai An Ta,
Nam Trung Tran, Thi Diep Trieu, Josef Urban, Ky Vu, and Roland Zumkeller. A formal proof of
the Kepler conjecture. Forum of Mathematics, Pi, 5, 2017.
[12] Adam Grabowski, Artur Kornilowicz, and Adam Naumowicz. Mizar in a nutshell. J. Formalized
_Reasoning, 3(2):153–245, 2010._
-----
Auto-Formalization Proposal for AITP 2018 Qingxiang Wang
[13] Cezary Kaliszyk, Josef Urban, and Jir´ı Vyskocil. System description: Statistical parsing of
informalized Mizar formulas. [Submitted, available at http://grid01.ciirc.cvut.cz/~mptp/](http://grid01.ciirc.cvut.cz/~mptp/synasc17sd.pdf)
```
synasc17sd.pdf.
```
[14] Leslie Lamport. How to write a proof. American Mathematical Monthly, 102(7):600–608, 1995.
-----
| [
"Yutaka, Nagashima"
] | 2018-01-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
Diagram Formalization Enhanced Multi-Modal Geometry Problem Solver | Mathematical reasoning remains an ongoing challenge for AI models, especially for geometry problems that require both linguistic and visual signals. As the vision encoders of most MLLMs are trained on natural scenes, they often struggle to understand geometric diagrams, performing no better in geometry problem solving than LLMs that only process text. This limitation is amplified by the lack of effective methods for representing geometric relationships. To address these issues, we introduce the Diagram Formalization Enhanced Geometry Problem Solver (DFE-GPS), a new framework that integrates visual features, geometric formal language, and natural language representations. We propose a novel synthetic data approach and create a large-scale geometric dataset, SynthGeo228K, annotated with both formal and natural language captions, designed to enhance the vision encoder for a better understanding of geometric structures. Our framework improves MLLMs' ability to process geometric diagrams and extends their application to open-ended tasks on the formalgeo7k dataset. | The Diagram Formalization Enhanced Geometry Problem Solver (DFE-GPS) is introduced, a new framework that integrates visual features, geometric formal language, and natural language representations that improves MLLMs' ability to process geometric diagrams and extends their application to open-ended tasks on the formalgeo7k dataset. | [
"Zeren, Zhang",
"Jo-Ku, Cheng",
"Xiaokai, Zhang",
"Jingyang, Deng",
"Na, Zhu",
"Lu, Tian",
"Tuo, Leng",
"Jinwen, Ma",
"Ziran, Qin"
] | 2024-09-06T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.04214 | https://arxiv.org/abs/2409.04214 | https://www.semanticscholar.org/paper/3e8411c573473b6225677930d9c13d577fe43c06 |
|
Diffusion of Thought: Chain-of-Thought Reasoning in Diffusion Language Models | Recently, diffusion models have garnered significant interest in the field of text processing due to their many potential advantages compared to conventional autoregressive models.In this work, we propose Diffusion-of-Thought (DoT), a novel approach that integrates diffusion models with Chain-of-Thought, a well-established technique for improving the reasoning ability of autoregressive language models. In contrast to autoregressive language models that make decisions in a left-to-right, token-by-token manner, DoT allows reasoning steps to diffuse over time through a diffusion language model and offers greater flexibility in trading-off computation for reasoning performance. Our experimental results demonstrate the effectiveness of DoT in multi-digit multiplication, boolean logic, and grade school math problems. In addition to that, DoT showcases promising self-correction abilities and benefits from existing reasoning-enhancing techniques like self-consistency decoding. Our findings contribute to the understanding and development of reasoning with diffusion language models. | null | null | null | null | NeurIPS 2024 | true | 0 | 0 | null | https://neurips.cc/virtual/2024/poster/95935 | null | null |
Diversity of Thought Improves Reasoning Abilities of LLMs | Large language models (LLMs) are documented to struggle in settings that require complex reasoning. Nevertheless, instructing the model to break down the problem into smaller reasoning steps, or ensembling various generations through modifying decoding steps boosts performance. However, these methods assume that the input prompt is fixed and expect the decoding strategies to introduce the diversity needed for ensembling. In this work, we discuss how one can create and leverage variations of the input prompt as a means of diversity of thought. We propose a method that automatically improves prompt diversity by soliciting feedback from the LLM to ideate approaches that are apt for the problem. We then ensemble the diverse prompts in our method DIVSE (DIVerse reasoning path Self-Ensemble) across multiple inference calls, or use diverse approaches within a single inference call; we call the latter IDIV-SE (In-call DIVerse reasoning path Self-Ensemble). Apart from our approaches outperforming prior work, DIV-SE(in particular) advances state-of-the-art performance on the challenging planning and graph coloring benchmarks. Our results improve the Pareto frontier of the accuracy-cost trade-off. | This work proposes a method that automatically improves prompt diversity by soliciting feedback from the LLM to ideate approaches that are apt for the problem, and improves the Pareto frontier of the accuracy-cost trade-off. | ## Diversity of Thought Improves Reasoning Abilities of LLMs
**Ranjita Naik[†]** **Varun Chandrasekaran**
Microsoft University of Illinois Urbana-Champaign
**Mert Yuksekgonul** **Hamid Palangi** **Besmira Nushi[†]**
Stanford University Microsoft Research Microsoft Research
**Abstract**
Large language models (LLMs) are documented to struggle in settings that require complex reasoning. Nevertheless, instructing the
model to break down the problem into smaller
reasoning steps, or ensembling various generations through modifying decoding steps boosts
performance. However, these methods assume
that the input prompt is fixed and expect the
decoding strategies to introduce the diversity
needed for ensembling. In this work, we discuss how one can create and leverage variations
of the input prompt as a means of diversity of
_thought. We propose a method that automati-_
cally improves prompt diversity by soliciting
feedback from the LLM to ideate approaches
that are apt for the problem. We then ensemble the diverse prompts in our method DIVSE (DIVerse reasoning path Self-Ensemble)
across multiple inference calls, or use diverse
approaches within a single inference call; we
call the latter IDIV-SE (In-call DIVerse reasoning path Self-Ensemble). Apart from our
approaches outperforming prior work, DIV-SE
(in particular) advances state-of-the-art performance on the challenging planning and graph
coloring benchmarks. Our results improve the
Pareto frontier of the accuracy-cost trade-off.
either relies on iterative trial-and-error (White et al.,
2023), or is expensive (Lester et al., 2021).
Previous works identified two simple, yet general prompting principles to enable complex reasoning: (i) Chain-of-Thought (CoT) prompting,
and (ii) ensembling multiple solutions from diverse decoding paths. CoT prompting (Wei
et al., 2022) improves performance by guiding
the LLM to follow step-by-step reasoning. Selfconsistency (SC) (Wang et al., 2023) instead increases the stochasticity by modifying the decoding
process and obtaining multiple completions, which
are then ensembled.
However, combining the two principles raises
limitations. First, inference is significantly more
expensive due to numerous runs, each generating
long completions with many reasoning steps. Next,
it may be impermissible to modify the decoding
process in some settings, such as commercial deployments. Finally, stochasticity-based methods
do not directly guide the diversity at the level of
thought or method, but rather at the token level.
This poses limitations because linguistic token diversity does not always ensure diverse and independent solution approaches.
In this paper, we explore how to explicitly promote the diversity of thought while mitigating the
aforementioned issues. Prior work by Li et al.
(2023) highlights the importance of prompt diversity, but their notion of diversity is captured through
variety in the few-shot examples provided with the
prompt; ours focuses on the reasoning approach.
We first solicit the LLM to produce multiple-highlevel reasoning approaches for problem-solving
(e.g., method of elimination, visualization
techniques etc. for math reasoning problems).
We then leverage GPT-4 to augment few-shot examples used in prior work (Wei et al., 2022) into the
corresponding approaches, whenever applicable.
We propose DIV-SE (DIVerse reasoning path
Self-Ensemble) to extract and aggregate responses
**1** **Introduction**
Large language models (LLMs) exhibit state-ofthe-art performance across a myriad of tasks, but
their effectiveness is strongly influenced by prompt
design (Anil et al., 2023; OpenAI, 2023a; Nori
et al., 2023). For complex reasoning tasks, the
right prompt can enable LLMs to capitalize on task
structure (Guidance, 2024), such as by facilitating
memory (by externalizing thought processes), or
through tractable problem decomposition (Zhou
et al., 2024). However, existing prompt design
2Correspondence to [email protected] and
[email protected].
-----
AQUA, GPT-3.5 Turbo
Blocksworld 4/5, GPT-4
Graph Coloring, GPT-4
DIV-SE-5
100
80
70
65
60
55
50
45
40
72.5
70.0
67.5
65.0
62.5
60.0
57.5
60
40
DIV-SE-3
IDIV-SE-3
SC-10
CoT SC-3 SC-5 [SC-7]
25 50 75 100
Total Inference Cost ($)
20
DIV-SE-5
DIV-SE-3
SC-20
SC-10
IDIV-5
SC-5
CoT
DIV-SE-3
IDIV-SE-3
SC-3 SC-5 SC-7
CoT
5 10 15
Total Inference Cost ($)
2.5 5.0 7.5 10.0
Total Inference Cost ($)
Figure 1: Diversity of Thought enhances the inference cost vs. accuracy trade-off. We compare DIV-SE and
IDIV-SE with SC (Wang et al., 2023) and CoT (Wei et al., 2022) across three benchmarks. The x-axis indicates the
total inference cost (as defined in § 3) on the benchmark using the given method, while the y-axis represents the
LLM’s performance. The few-shot-CoT setting is represented by filled gray dots, while the zero-shot-CoT setting is
indicated by unfilled dots. Notice that for a fixed cost, our approaches always give better performance.
(via majority vote) across multiple inference calls
(§ 2.2). Since distinct approaches introduce diversity at the “thought” level, our methodology results in improved ensemble accuracy. In Fig. 1, we
show that it yields more accurate results across
multiple reasoning benchmarks at a fixed inference cost, without modifying the decoding procedure. For instance, in the BLOCKSWORLD 4/5
task (Valmeekam et al., 2022), DIV-SE improves
the performance by 29.6 percentage points (p.p).
However, this method still leverages multiple inference calls, which could be costly.
To reduce inference costs, we build on the observation that the approaches are often mutually independent, and can be combined in a single prompt
to solicit multiple solutions (Cheng et al., 2023).
Based on this premise, we propose IDIV-SE (Incall DIVerse reasoning path Self-Ensemble; § 2.2),
which combines all approaches within the same
prompt and aggregates all resulting outputs to leverage diversity with a reduced cost. Fig. 1 demonstrates that this method obtains comparable accuracy to DIV-SE and better performance than prior
work with lower inference costs.
We push the pareto frontier of the cost-accuracy
trade-off of prompting strategies across multiple
reasoning tasks (§ 4), outperforming both CoT and
SC prompting on both GPT-3.5 and GPT-4. This
is evident from Fig. 1 for the AQUA-RAT (Ling
et al., 2017), planning (Valmeekam et al., 2023),
and graph coloring (Stechly et al., 2023) benchmarks, where there is a performance improvement
of 16.52, 29.6, and 82.5 p.p respectively. These
improvements, some of which are state-of-the-art,
show the potential of thought diversity to extract
complex reasoning abilities from LLMs that were
impossible to leverage otherwise. We will open
source our code upon publication to encourage further research.
**2** **Diversity through LLM Interactions**
First, we introduce terms and notations that we use
throughout the paper. We use upper case for sets,
lower case for variables, and [n] = {1, · · ·, n}.
**Approach: These are reasoning strategies for prob-**
lem solving, denoted with the variable a. For example, for the GSM8K (Cobbe et al., 2021), a benchmark of grade-school math problems, some of the
(generated) approaches can include a1 =“using
visualizations”, a2 =“working backwards”,
_a3_ =“using direct calculation”, and
_a4 =“method of elimination”._
**Persona: In addition to specifying “how” to solve**
a reasoning problem, specifying a persona can
also influence how the LLM behaves (Salewski
et al., 2023). We denote this with the variable
_p e.g., p1 =“Thinking_ like Alan Turing”,
_p2 =“Thinking like Math Professor” for the_
GSM8K task. Note that both approaches and personas are dependent on the reasoning problem.
**2.1** **Using the LLM as a guide**
Proposed method for creating prompts, which we
term DIVERSEPROMPTING is presented in Algorithm 1. Below, we will describe each step in more
detail. At a high-level, we solicit feedback from
the LLM on how to solve tasks.
**Step 1+2. Extracting Approaches & Personas:**
Note that LLMs trained on internet-scale data encode a significant knowledge from multiple do
-----
A: Michael started with 58 golf balls. After losing 23 on tuesday, from 1:00 PM to 5:00 PM, we can assume it burns for a total of
he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 4 hours. 4 divided by 2 is 2.the candle would be 2 centimeters
golf balls. The answer is 33. shorter after burning from 1:00 PM to 5:00 PM. The answer is 2
Output
A: If a candle melts by 2 centimeters every hour and it burns
from 1:00 PM to 5:00 PM, we can assume it burns for a total of
4 hours. 4 divided by 2 is 2.the candle would be 2 centimeters
shorter after burning from 1:00 PM to 5:00 PM. The answer is 2
centimeters.
Input
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls.
On wednesday, he lost 2 more. How many golf balls did he have
at the end of wednesday?
A: Michael started with 58 golf balls. After losing 23 on tuesday,
he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33
golf balls. The answer is 33.
Q: A candle melts by 2 centimeters every hour that it burns.
How many centimeters shorter will a candle be after burning
from 1:00 PM to 5:00 PM?
**IDIV-SE Prompting**
Input Output
wednesday, he lost 2 more. How many golf balls did he have at the **Approach 1 <using hypothetical numbers> : Assume a starting**
end of wednesday? length for the candle, say 100 centimeters. (…) The candle will be 8
at the end of Wednesday. **Approach 2 <using stepwise reduction>: Let x be the original**
using a diagram with three stages: (…) leaving him with 33 golf **Approach 3 <using rate and time>: The rate of the candle melting**
balls at the end of Wednesday. is 2 centimeters per hour. So, if we multiply the rate by the time it
**IV** **E**
Input
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On
wednesday, he lost 2 more. How many golf balls did he have at the
end of wednesday?
A: Approach 1 <Using direct calculation>: Subtract the total
number of golf balls lost from (…) Therefore, he had 33 golf balls
at the end of Wednesday.
**Approach 2 <Using algebra>: Let x be the number of golf balls**
Michael had (…) we find that Michael had 33 golf balls at the end
of Wednesday.
**Approach 3 <Using visualization>: Represent the problem**
using a diagram with three stages: (…) leaving him with 33 golf
balls at the end of Wednesday.
Q: A candle melts by 2 centimeters every hour that it burns. How
many centimeters shorter will a candle be after burning from 1:00
PM to 5:00 PM?
Output
**Approach 1 <using hypothetical numbers> : Assume a starting**
length for the candle, say 100 centimeters. (…) The candle will be 8
centimeters shorter. The answer is 8 centimeters.
**Approach 2 <using stepwise reduction>: Let x be the original**
length of the candle. After 4 hours(…) the candle will be 2
centimeters shorter at the end of 4 hours. The answer is 2
centimeters.
**Approach 3 <using rate and time>: The rate of the candle melting**
is 2 centimeters per hour. So, if we multiply the rate by the time it
burns, we (…)At 5:00 PM, the candle is 8 centimeters shorter than
its original length. The answer is 8 centimeters.
Figure 2: Diversity of Thought. This illustration depicts CoT and IDIV-SE prompting strategies. Notice that both
have a single example. However, IDIV-SE presents more diversity in terms of reasoning paths. This enables it to
generate diverse completions, yielding more accurate responses.
mains (Liang et al., 2022; Bubeck et al., 2023).
While LLMs may not be perfect at solving reasoning tasks, we hypothesize that they are helpful in
providing high-quality intermediate feedback.
To extract approaches, we utilize the following
methodology: (i) Randomly picking a question
from the reasoning dataset D we want to evaluate;
and (ii) Creating an instruction prompt where we
ask the LLM to generate the names of b ∈ [1, 5]
_approaches to solve the aforementioned question_
conforming to a predefined template (for easier
post-processing). Refer to Figure 5 for an example
of the prompt used.
We extract the part of the response that is compliant with the template and store it. We repeat
this process c times (obtaining of c · b candidate approaches), and pick the n most frequent approaches
to store in set A[1]. This process is abstracted as
method det_approaches(.).
One can repeat the above process used to extract
relevant personas for a given reasoning task. However, we followed a simpler route and asked the
model directly for relevant personas for a given
1In practice, we set c = 100, b = 5, n ∈{3, 5}, and
_|V | < 20._
task and then included them in the set of m candidate personas P used. This is abstracted as method
det_personas(.). Note that no persona (ϕ) is
also part of the persona set.
**Step 3. Choosing the Best Persona, Approach**
**Pairs: The choice of persona and approaches intro-**
duces a principled way to promote diversity.
If the set of personas is P, and the set of approaches is A, the Cartesian product of P and A
yields the total number of prompts. In practice, for
each combination (denoted by si) of persona and
approach, we evaluate the prompt formed using
the composition on a small validation set V [1] and
choose the best performing “size” elements on the
given task[2].
**Step 4. Augmenting few-shot examples: Once**
the (subset of) approach and persona pairs are fixed,
we ask the LLM to augment existing few-shot examples (denoted F = _f1,_ ) with the given set
_{_ _· · · }_
of approaches. Specifically, we take the few-shot
examples provided by Wei et al. (2022), and ask
the LLM to solve them in the style of a chosen
approach and persona pair (Fig.8); we term the
2For a given reasoning task, we perform this process
once (for GPT-3.5 Turbo), and re-use our selection across
all LLMs we evaluate.
-----
DIVERSEPROMPTING: Prompt creation.
**procedure DIVERSEPROMPT(size, type, F, D, V )**
_▷_ **Step 1: Identify different approaches to be used.**
_A =_ _a1, . . ., an_ det_approaches(D)
_{_ _} ←_
_▷_ where A is the set of approaches
_▷_ **Step 2: Identify different personas to be used.**
_P =_ _ϕ, p1, . . ., pm_ det_personas(D)
_{_ _} ←_
_▷_ where P is the set of personas
_▷_ **Step 3: Find the best combination.**
_S =_ _s1, . . . ssize_ combine(A, P, size, V )
_{_ _} ←_
_▷_ where S is the set of combined approaches and personas, and si = (p, ai ∈ _A)_
_▷_ **Step 4: Augment the few-shot examples.**
_T =_ _T[˜]i,j, . . ._ augment(S, F )
_{_ _} ←_
_▷_ where T is the set of augmented examples, and _T[˜]i,j_
is formed using si ∈ _S and fj ∈_ _F_ ; |T _| =size_
aggregation of multiple responses) requires multiple inference calls, which can be costly.
**Candidate 2. IDIV-SE: To further reduce the**
inference costs while promoting diversity, we propose IDIV-SE (In-call DIVerse reasoning path SelfEnsemble). In IDIV-SE, the final prompt is a com_position of all approach and persona pairs and_
_corresponding augmented few-shot examples, and_
_the question to be solved. An example is presented_
in Fig. 2 (bottom left). More examples of prompts
are presented in the appendix in Fig. 9 through 16.
This noticeably decreases the number of calls to be
made, since all few-shot examples are presented
within the same prompt. We note that there might
be error propagation due to the autoregressive nature of models. We evaluate this in detail in § 4.3.
**Practicality.** Crucially, DIVERSEPROMPTING
finds approaches that are general and reusable
across similar reasoning problems. We reused the
strategies identified for solving AQUA-RAT and
Planning benchmark respectively in the MATH
(counting and probability) and Graph Coloring
benchmarks. This also reduces the cost of repeated
evaluation on a separate evaluation set.
**Aggregation. We aggregate the responses via ma-**
jority vote for both prompting strategies. Other
aggregation strategies can also be leveraged, such
as utilizing the LLM itself to aggregate responses
or weighted aggregation. In § 4.4, we consider
an aggregation strategy proposed by Yoran et al.
(2023) and describe how compatible it is with our
prompting approaches.
**3** **Experiments**
We consider the following reasoning benchmarks.
_Arithmetic Reasoning:_ We use: (i) AQUARAT (Ling et al., 2017), a suite of algebraic word
problems, (ii) GSM8K (Cobbe et al., 2021), a
benchmark of grade-school math problems described in natural language (involving elementary
arithmetic operations), and (iii) MATH (Counting
and Probability) (Hendrycks et al., 2021), a collection of math problems from which we choose only
counting and probability as these are not covered
by GSM8K and AQUA-RAT. For all datasets, we
use the test split for evaluation, containing 254,
1319, and 474 questions respectively.
_Planning Capabilities: We use the Blocksworld_
Planning benchmark proposed in Valmeekam et al.
(2022, 2023). The benchmark has two datasets:
one involves 3 blocks (BLOCKSWORLD 3, 100
_▷_ **Step 5: Compose the final prompt.**
_O ←_ compose(T, S, type)
**return O**
_▷_ Return the final output.
**end procedure**
output augmented few-shot examples. This is abstracted in method augment(.), where _T[˜]i,j is the_
set of augmented few-shot examples corresponding
to the approach and persona pair from si and example fj. An example is visualized in the bottom
left of Fig. 2, where the prompt contains different
approaches for solving a math problem.
**2.2** **Designing the Prompts**
**Step 5. Prompt Composition: We create prompts**
for our approach using the best approach and persona pairs identified in step 3, and augmented fewshot examples from step 4 as shown in Fig. 2 and 4.
We now describe two techniques to generate
prompts with the augmented demonstrations (T )
that have been accumulated.
**Candidate 1. DIV-SE: We first propose DIV-SE**
(DIVerse reasoning path Self-Ensemble), a method
to execute the diverse set of approaches in different
inference calls and aggregate their solutions. Apart
from the question to be solved and the augmented
few-shot examples, the final prompt contains a persona, approach, and additional instructions. One
example is visualized in Fig. 4 (please refer to
appendix for more examples of prompts: Fig. 9
through 16). Diversity is ensured through running
_inference with multiple prompts, each with a dif-_
_ferent approach and persona pairs and augmented_
_few-shot examples. However, since the approaches_
are executed separately, generating a solution (via
-----
Blocksworld 3, GPT-4
Blocksworld 3, GPT-4
GSM8K, GPT-3.5 Turbo
SC-10
95
90
95
90
85
80
75
70
65
DIV-SE-5
88
86
85
80
84
82
|Blo|ockswor|rld 3, G|GPT-4|D|
|---|---|---|---|---|
|||DIV|-SE-3|D|
||||||
|ID|IV-SE-3||||
||||||
|CoT|S|SC C-5|SC -7|-|
||SC-3||||
DIV-SE-3
IDIV-SE-3
SC-10
SC-7
CoT SC-5
SC-3
5 10 15 20
Total Inference Cost ($)
SC-10
|D DIV|IV-SE-5 -SE-3|Col3|Col4|
|---|---|---|---|
|||||
|IDIV-S|E-3|||
|||||
|CoT|SC-3|SC-5 S|C-7|
|||||
DIV-SE-5
DIV-SE-3
IDIV-SE-3
SC-5 SC-7
SC-3
CoT
25 50 75 100
Total Inference Cost ($)
75
70
|Col1|Col2|DIV-S|E-5|
|---|---|---|---|
||SC- DIV-SE|5 SC-7 -3|S|
|||||
|IDI|SC-3 V-SE-5|||
|CoT||||
DIV-SE-5
SC-5 SC-7
DIV-SE-3
SC-3
IDIV-SE-5
CoT
10 20 30
Total Inference Cost ($)
Figure 3: Diversity of Thought enhances the inference cost and accuracy trade-off. We compare DIV-SE and
IDIV-SE with SC (Wang et al., 2023) and CoT (Wei et al., 2022) across three benchmarks. The x-axis indicates the
total cost (as defined in § 3) of running inference with the LLM on the benchmark using the given method, while the
y-axis represents the LLM’s performance. The FS-CoT setting is represented by filled gray dots, while the ZS-CoT
setting is indicated by unfilled dots. Notice that for BLOCKSWORLD 3, despite being in the ZS-CoT setting, our
approaches are more performant than the SC-s (FS-CoT) baseline.
instances), while the other dataset involves 4 or 5
blocks (BLOCKSWORLD 4/5, 500 instances).
_Constraint Satisfaction Optimization: We use the_
GRAPH COLORING benchmark (Stechly et al.,
2023) containing 100 examples to test reasoning
for constraint satisfaction. Commonsense Reason_ing: We use COMMONSENSEQA (Talmor et al.,_
2019) which consists of generic multiple-choice
questions elicited for testing common sense reasoning. We use the validation split containing 1,221
questions.
**Language Models.** We evaluate our proposed
methods on both GPT-3.5 Turbo (OpenAI, 2022)
and GPT-4 (OpenAI, 2023b). We also conduct an
additional evaluation on LLaMA-2 70B (Touvron
et al., 2023) to explore the performance of our technique on open-source LLMs. For the latter, we use
meta-llama/Llama-2-70b-chat-hf through the
Transformers library (Wolf et al., 2019).
**Baselines.** We consider Chain-of-Thought
(CoT) (Wei et al., 2022) and Self-Consistency
(SC ) (Wang et al., 2023) as our baselines. For CoT,
we consider two settings: zero-shot (ZS) CoT (Kojima et al., 2022) (i.e., “Think step by step”
is added to the prompt), and few-shot (FS) CoT
(i.e., CoT with demonstrations). In our SC runs,
we set the temperature T = 0.7 without top-k
truncation and sample up to s ∈ [1, 10] outputs
(denoted SC-s). For all other approaches, we set
_T = 0. We use ensembles of size 5 in IDIV-SE_
and DIV-SE for GSM8K and AQUA-RAT. For
the planning, GRAPH COLORING, and COMMON
SENSEQA benchmarks, we use a size of 3.
**Performance Metrics. We measure the accuracy**
on the task, and the generation inference cost.To
measure the cost, we assume 1000 tokens are about
750 words[3]. For GPT-4 (8K) the input and output
prices used to estimate inference cost are $0.03/1k
tokens and $0.06/1k tokens, respectively. For GPT
3.5 Turbo (16K), the input and output prices used
in the cost estimation are $0.003/1k (tokens) and
$0.004/1k (tokens) respectively.
**Results Summary. include: Across most bench-**
marks we consider, our techniques provide substantial performance gains (e.g., 16.52, 82.5, and
14.3 p.p improvements for AQUA-RAT, GRAPH
COLORING, and MATH respectively). They are
also Pareto optimal (in terms of the utility vs. cost
trade-off). For the challenging planning benchmark
(BLOCKSWORLD 4/5), our techniques improve accuracy by 29.6 p.p achieving state-of-the-art performance. Using GPT-4 for BLOCKSWORLD 3, our
approach (in the ZS-CoT setting) is substantially
more effective than SC-10 (in the FS-CoT setting)
at 4× lower cost (Figure 3 (center figure)).
Since prompts are chained together in IDIV-SE,
error propagation is possible. Our evaluation on
AQUA-RAT in § 4.3 suggests that even though
error propagation is estimated as less than 6.5% for
both models, these rates are comparable to differences in performance between DIV-SE and IDIVSE. When combined with aggregation approaches
that are capable of reasoning across the diverse
generations (Yoran et al., 2023), we observe additional performance gains as shown in § 4.4. For
the AQUA-RAT benchmark for instance, we see
an accuracy of 67.7% for GPT-3.5 (3.23 p.p improvement to majority voting).
3https://openai.com/pricing
-----
accuracy while maintaining low costs.
**4.1.2** **Counting and probabilistic reasoning**
**via MATH**
_GPT-4 Results: From Table 2, we see that DIV-SE_
achieves an accuracy increase of 14.3 and 16.87 p.p
in the FS-CoT (baseline of 66.46%) and ZS-CoT
(baseline of 62.24%) settings, respectively. On
the other hand, IDIV-SE achieves a boost of 5.54
and 9.76 p.p in the FS-CoT and ZS-CoT settings,
respectively, over the baseline.
_GPT-3.5 Results: From Table 2, we see that DIV-_
SE yields a gain of 21.84 and 13.04 p.p in the FSCoT (baseline of 30.38%) and ZS-CoT (baseline
of 31.90%) settings, respectively. Likewise IDIVSE achieves a boost of 13.72 and 10.60 p.p in the
FS-CoT and ZS-CoT settings, respectively.
**4.1.3** **Planning via BLOCKSWORLD**
_Setup: The benchmark provides both natural lan-_
guage and Planning Definition and Domain Language prompts (McDermott et al., 1998). We
use natural language prompts in all the experiments. For the baseline runs, we introduce minor alterations to the prompt originally proposed
by Valmeekam et al. (2023). These changes involve incorporating an explicit directive to prevent
under-block movement and resolving minor language ambiguities we observed to be problematic
during initial investigation. Furthermore, we reposition the initial condition and goal state information to the beginning of the prompt. The modified
improved prompt is presented in Fig. 9.
We aggregate the plans through majority voting
and utilize string matching for comparing the plans.
As a result, we optimize the plan by eliminating
the redundant “no-op” steps.
_GPT-4 Results: We note that GPT-4 performs_
slightly better in a ZS setting, and use this to run
all experiments. From Fig. 1, we observe that for
BLOCKSWORLD 3, ZS-CoT records an accuracy
of 70%, while SC-10 reaches an accuracy level of
73%. IDIV-SE enhances the absolute accuracy by
12 p.p above the ZS-CoT baseline, while DIV-SE
produces an impressive state-of-the-art accuracy of
94%. An analysis of the six unsuccessful instances
suggests the capacity for further performance improvement by increasing the size of the ensemble,
as already two out of five current approaches generate accurate plans. For the BLOCKSWORLD 4/5
case, the ZS-CoT accuracy is 40%, while SC-10
has an accuracy of 41.2%. Here, IDIV-SE results
**Method** **Graph Coloring** **BW 3** **BW 4/5**
CoT 15.0 70.00 40.00
SC-3 18.0 66.00 38.20
SC-5 20.0 70.00 38.40
SC-7 22.0 72.00 40.00
SC-10 23.0 73.00 41.20
IDIV-SE 74.00 82.00 57.00
DIV-SE **97.00** **94.00** **69.60**
Table 1: Performance on GRAPH COLORING and
BLOCKSWORLD planning for GPT-4 in the ZS-CoT setting. We compare DIV-SE and IDIV-SE with SC (Wang
et al., 2023) and CoT (Wei et al., 2022).
**4** **Results**
**4.1** **Main Results**
We present the summary of results in Table 1 and
2. Detailed results are available in Appendix C.
These also cover results on the impact of ensemble
size in Appendix D.
**Setting** **Method** **AQuA** **MATH** **CQA**
CoT 59.00 31.90 71.40
SC-3 61.40 32.07 72.00
GPT-3.5 ZS SC-5 63.37 38.19 72.80
IDIV-SE 62.60 42.50 74.00
DIV-SE **72.83** **44.94** **74.50**
CoT 57.48 30.38 79.4
GPT-3.5 FS IDIV-SE 64.57 44.10 80.00
DIV-SE **72.84** **52.22** **80.40**
CoT 70.47 62.24 81.60
GPT-4 ZS IDIV-SE 71.65 72.00 **82.50**
DIV-SE **80.31** **79.11** 81.70
CoT 71.90 66.46 87.70
GPT-4 FS IDIV-SE 79.90 72.00 **89.00**
DIV-SE **84.25** **80.76** 88.00
Table 2: Performance on AQUA-RAT, MATH (Counting and Probability), and COMMONSENSEQA for GPT3.5 Turbo and GPT-4 in the ZS-CoT and few-shot-CoT
settings respectively.
**4.1.1** **Arithmetic reasoning via AQUA-RAT**
_GPT-4 Results: In Table 2, we observe that DIV-SE_
achieves an accuracy increase of 9.84 and 14.6 p.p
in the FS-CoT (baseline accuracy of 71.9%) and
ZS-CoT (baseline of 70.47%) settings, respectively.
While the gains from IDIV-SE are nominal in ZSCoT, it achieves a boost of 7.7 p.p for FS-CoT.
_GPT-3.5 Results: In Table 2, we see that DIV-SE_
yields a gain of 14.23 and 16.52 p.p in the FSCoT (baseline of 57.48%) and ZS-CoT (baseline
of 59%) settings, respectively. Within the FS-CoT
setting, IDIV-SE gets an absolute increase of 7 p.p.
Note that Fig. 1 also displays the total inference
cost. Both IDIV-SE and DIV-SE are Pareto opti_mal, indicating their capacity to achieve a higher_
-----
in an absolute gain of 17 p.p above the ZS-CoT
baseline, and DIV-SE too enhances performance,
leading to 69.6%. As outlined in Fig. 1 and 3, both
IDIV-SE and DIV-SE achieve Pareto optimality.
_GPT-3.5 Results: The baseline performance on_
BLOCKSWORLD 3 is 6%, and on BLOCKSWORLD
4/5 is 0.6%. We do not see any additional improvement using both IDIV-SE and DIV-SE. Qualitatively, we observe that during plan generation,
GPT-3.5 fails to follow the restrictions provided as
part of the problem instructions too often, leading
to either infeasible or incorrect plans. This shows
instruction following capabilities are crucial to the
success of the methods proposed here.
**4.1.4** **Constraint Satisfaction via GRAPH**
**COLORING**
There may exist numerous non-optimal yet valid
colorings for a given graph. Since exact string
matching is not usable for identifying the majority
solution from the ensembles of IDIV-SE and DIVSE, we employ the external, sound verifier (Stechly
et al., 2023) to pick the correct solution.
_GPT-4 Results: From Fig. 1, it is observed that_
ZS-CoT achieves an accuracy of 15%, whereas
SC-10 attains an accuracy level of 23%. IDIV-SE
improves the absolute accuracy by 59 p.p above
the ZS-CoT baseline. Remarkably, DIV-SE delivers a state-of-the-art accuracy of 97%. Given that
GPT-4’s performance plateaus in the ZS setting, we
chose to omit conducting the few-shot experiments.
**Summary: Methods in this work often demon-**
strate state-of-the-art performance on reasoning
tasks. This is most significant in the planning
and constraint satisfaction benchmarks, where the
corresponding authors claimed immense difficulty
for existing LLMs. Our work shows that statusquo prompt design approaches including chain of
thought are too generic for these problems, and
prompt customization (via DIVERSEPROMPTING)
can yield substantial gains by guiding the chain of
thought to the general nature of the problem.
**4.2** **Open Source Models**
Due to the limited computational budget, we
only performed experiments with the AQUA-RAT
benchmark. Please refer to Appendix B for further details. Table 3 demonstrates the results for
LLaMA-2 70B with 8-bit quantization. DIV-SE
and IDIV-SE demonstrate an improvement of over
10 p.p over the baseline in the FS-CoT settings.
However, the gain in the ZS-CoT setting has been
negligible. We hypothesize that this is partly due to
model’s lack of capabilities to both follow instructions and the mentioned approach in the absence of
examples.
**Prompting Strategy** **ZS-CoT (%)** **FS-CoT (%)**
CoT 31.32 29.1
IDIV-SE 27.00 39.7
DIV-SE **32.00** **39.9**
Table 3: Results on AQUA-RAT and LLaMA-2 70B.
**4.3** **Errors & Prompt Utility**
_Error Propagation: Due to the autoregressive na-_
ture of LLM decoding, early incorrect answers in
IDIV-SE may get propagated to the latter ones.
To quantify this, we select examples where the
solution is incorrect and all five approaches produce the same erroneous answer. We focus only
on these cases to see if e.g., a wrong conclusion
in the initial approaches leaks into the following
ones. Next, we attempt the last two approaches
again in a separate session: if the LLM generates
the same outcomes as in the original session (i.e.,
IDIV-SE setup) within 3 attempts, we consider it
as no error propagation. However, if it does not
produce the same answer within the 3 attempts, we
interpret this as a case of error propagation since
the change in answer could be attributed to the initial approaches with wrong answers in the chain.
We measure this phenomenon on AQUA-RAT (FSCoT) on both GPT-4 and GPT-3.5. We find that
GPT-4 and GPT-3.5 have error propagation rates
of 6.2% and 5.5% respectively, which are comparable to performance differences between DIV-SE
and IDIV-SE, making error propagation one of the
_main explanatory hypotheses for the differences be-_
_tween the two methods. Reducing these error rates_
remains a challenging problem given the autoregressive nature of current LLMs.
_Beyond Thinking Step by Step: The diverse ap-_
proaches and personas we utilize not only enhance
the performance in IDIV-SE and IDIV-SE, but are
also independently superior to ZS-CoT. Table 4
highlights this effect, which showcases the importance of conditioning the model for solutions via
DIVERSEPROMPTING.
**4.4** **Alternative Aggregation Strategies**
Our aggregation thus far relies on majority voting.
Alternatively, we can also utilize the meta reasoning technique proposed by Yoran et al. (2023) to
-----
**Dataset, Model** **Persona, Approach** **Accuracy (%)**
_∅,Think step by step_ 57.48
AQUA-RAT, GPT-3.5 _∅, Using Algebra_ 60.24 (+2.76)
Thinking like Alan Turing, ∅ 61.81 (+4.33)
Dr. Patel: A renowned mathematician, ∅ **65.75 (+8.27)**
_∅, State tracking prompt (Valmeekam et al., 2022)_ 42.00
BLOCKSWORLD 4/5, GPT-4 _∅, Finite State Machine_ 55.80 (+13.80)
Alan Turing, Action Rationale 57.80 (+15.80)
Alan Turing, Progressive Block Placement Approach **58.80 (+16.80)**
Table 4: Prompts, derived from approaches and personas, boost performance. Blue rows denote ZS-CoT
prompts, while black lines denote FS-CoT prompts. ∅ denotes absence (of persona or approach respectively).
reasoning process as a program, which is then delegated to an external tool. In our work, we neither
change the decoding process nor assume the existence of trusted tools. This makes our solution
directly applicable to black-box models.
**Prompting Strategies: Brown et al. (2020) note**
that demonstrations to prompts, encoded as inputoutput pairs, produce drastic performance increase
in larger LLMs. Wei et al. (2022) encourage internal dialogue by forcing the LLM to generate a
sequence of intermediate steps for reasoning problems. This improves reasoning performance on
larger LLMs (Nye et al., 2021; Chung et al., 2022;
Kojima et al., 2022). Zhou et al. (2022) automatically break a complex problem into simpler subproblems and then solve them in sequence. Across
all these techniques, the common practice is to keep
the prompts fixed, but aggregate responses across
multiple trials by varying the temperature. In our
work, we vary the input prompt itself. A work that
is similar in spirit is that of Yoran et al. (2023),
which instead of aggregating the response of multiple reasoning paths, forces the model to reason
across them before aggregation. Another relevant
work is that of Li et al. (2023), which shows the
importance of prompt diversity. However, they rely
on selecting few-shot demonstrations from a holdout set (which defines diversity in their method),
without explicitly stating reasoning pathways.
**6** **Conclusions**
In this work, we promoted diversity of thought
as a principled prompting strategy and proposed
methodologies that leverage the LLM as a guide to
design a diverse set of approaches to solve complex
reasoning tasks. Extracting solution approaches
from LLMs themselves becomes a discovery mechanism that seeds and conditions generative solutions. Reported results on a variety of tasks confirm
that there is a large space for improvement in com
**Method** **GPT-4 (%)** **GPT-3.5 (%)**
Majority Voting **79.90** 64.47
Meta Reasoning 79.24 **67.70**
Table 5: Alternative aggregation strategies. Observe
that, for the AQUA-RAT benchmark (FS-CoT), IDIVSE produces more accurate results only with GPT-3.5.
accumulate the results and exploit the rich information present in the reasoning steps. To this end,
we store the responses generated by IDIV-SE, and
request the model to meta reason over them in a different prompt and session. Table 5 suggests that the
proposed reasoning paths contain rich information
that is effectively exploited by the meta reasoning
aggregation. Future post-hoc techniques may consider to learn the accuracy of the diverse prompting
approaches, and weigh them accordingly. Nevertheless, the fact that techniques presented here
provide visible improvements even with simple approaches like majority voting, demonstrates their
added value independently from different aggregation algorithms.
**5** **Related Work**
**Prompt Optimization: Pryzant et al. (2023) mod-**
els the prompts as optimizable discrete variables,
and minimizes the loss of the reasoning task. Jones
et al. (2023) optimize over the prompt space, but
to identify failure modes. However, optimizationbased approaches often require the task to have a
differentiable loss function, which is a strong condition. In our work, we utilize feedback from the
LLM (not through gradients) during prompt design.
Similarly to Cheng et al. (2023), IDIV-SE batches
the responses for multiple queries within a prompt.
**Decoding Optimizations and Tools: Wang et al.**
(2023) replace the naive greedy decoding by sampling a diverse set of reasoning paths (e.g., through
temperature sampling), and then selects the most
consistent answer. Chen et al. (2022) express the
-----
plex reasoning by uncovering the necessary skills
and knowledge from LLMs through targeted and
diverse prompting methods. These results demonstrated how promoting diversity can improve the
Pareto frontier of accuracy-cost trade-off for current LLMs and yield state-of-the-art solutions for
planning and mathematical reasoning tasks. We
hope that future work will expand these results to
complex tasks from other real-world applications.
**7** **Limitations**
Our study mainly experimented with GPT-3.5
and GPT-4 models because of their instructionfollowing capabilities. While current open-source
models have shown remarkable improvements to
this end, they are still not able to reliably follow
instructions relevant to complex reasoning tasks
(e.g. state tracking, plan validity, constraint satisfaction). We hope that progress in the field will
enable further experimentation in this direction.
In addition, we also observe that error propagation during autoregressive generation may sometimes negatively impact the performance of IDIVSE, where all approaches are executed in order
within the same prompt. Some of this could be
addressed by explicitly instructing the model to
forget about the previous solution but ultimately
as long as previous generation history remains in
context and short-term memory, error propagation
risks may still need to be tracked and measured.
**References**
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak
Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng
Chen, et al. 2023. Palm 2 technical report. arXiv
_preprint arXiv:2305.10403._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie
Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind
Neelakantan, Pranav Shyam, Girish Sastry, Amanda
Askell, Sandhini Agarwal, Ariel Herbert-Voss,
Gretchen Krueger, Tom Henighan, Rewon Child,
Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens
Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack
Clark, Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei. 2020.
[Language models are few-shot learners.](https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf) In Ad_vances in Neural Information Processing Systems,_
volume 33, pages 1877–1901. Curran Associates Inc.
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar,
Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint
_arXiv:2303.12712._
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W. Cohen. 2022. [Program of thoughts](http://arxiv.org/abs/2211.12588)
[prompting: Disentangling computation from reason-](http://arxiv.org/abs/2211.12588)
[ing for numerical reasoning tasks.](http://arxiv.org/abs/2211.12588)
[Zhoujun Cheng, Jungo Kasai, and Tao Yu. 2023. Batch](http://arxiv.org/abs/2301.08721)
[prompting: Efficient inference with large language](http://arxiv.org/abs/2301.08721)
[model apis.](http://arxiv.org/abs/2301.08721)
H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay,
W. Fedus, E. Li, X. Wang, M. Dehghani, and
S. Brahma. 2022. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Tim Dettmers, Mike Lewis, Younes Belkada, and Luke
Zettlemoyer. 2022a. Llm. int8 (): 8-bit matrix multiplication for transformers at scale. arXiv preprint
_arXiv:2208.07339._
Tim Dettmers, Mike Lewis, Sam Shleifer, and Luke
Zettlemoyer. 2022b. 8-bit optimizers via block-wise
quantization. 9th International Conference on Learn_ing Representations, ICLR._
Guidance. 2024. A guidance language for control[ling large language models. https://github.com/](https://github.com/guidance-ai/guidance)
[guidance-ai/guidance.](https://github.com/guidance-ai/guidance)
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul
Arora, Steven Basart, Eric Tang, Dawn Song, and
[Jacob Steinhardt. 2021. Measuring mathematical](http://arxiv.org/abs/2103.03874)
[problem solving with the math dataset.](http://arxiv.org/abs/2103.03874)
Erik Jones, Anca Dragan, Aditi Raghunathan, and Jacob Steinhardt. 2023. Automatically auditing large
language models via discrete optimization. arXiv
_preprint arXiv:2303.04381._
T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. 2022. Large language models are zero-shot
reasoners. In Advances in Neural Information Pro_cessing Systems._
Brian Lester, Rami Al-Rfou, and Noah Constant. 2021.
The power of scale for parameter-efficient prompt
tuning. arXiv preprint arXiv:2104.08691.
Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen,
Jian-Guang Lou, and Weizhu Chen. 2023. Making
language models better reasoners with step-aware
verifier. In Proceedings of the 61st Annual Meet_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 5315–5333._
-----
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris
Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian
Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language
models. arXiv preprint arXiv:2211.09110.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blun[som. 2017. Program induction by rationale genera-](https://doi.org/10.18653/v1/P17-1015)
[tion: Learning to solve and explain algebraic word](https://doi.org/10.18653/v1/P17-1015)
[problems. In Proceedings of the 55th Annual Meet-](https://doi.org/10.18653/v1/P17-1015)
_ing of the Association for Computational Linguistics_
_(Volume 1: Long Papers), pages 158–167, Vancouver,_
Canada. Association for Computational Linguistics.
Drew McDermott, Malik Ghallab, Adele E Howe,
Craig A Knoblock, Ashwin Ram, Manuela M Veloso,
Daniel S Weld, and David E Wilkins. 1998. Pddl-the
planning domain definition language.
Harsha Nori, Yin Tat Lee, Sheng Zhang, Dean Carignan, Richard Edgar, Nicolo Fusi, Nicholas King,
Jonathan Larson, Yuanzhi Li, Weishung Liu, et al.
2023. Can generalist foundation models outcompete special-purpose tuning? case study in medicine.
_arXiv preprint arXiv:2311.16452._
Michael Nye, Anders J Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David
Dohan, Aitor Lewkowycz, Marten Bosma, Daan
Luan, et al. 2021. Show your work: Scratchpads
for intermediate computation with language models.
_arXiv preprint arXiv:2112.00114._
[OpenAI. 2022. Introducing chatgpt.](https://openai.com/blog/chatgpt/)
[OpenAI. 2023a. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
[OpenAI. 2023b. Gpt-4 technical report.](https://arxiv.org/abs/2303.08774)
Reid Pryzant, Dan Iter, Jerry Li, Yin Tat Lee, Chenguang Zhu, and Michael Zeng. 2023. Automatic
prompt optimization with" gradient descent" and
beam search. arXiv preprint arXiv:2305.03495.
Leonard Salewski, Stephan Alaniz, Isabel Rio-Torto,
Eric Schulz, and Zeynep Akata. 2023. In-context impersonation reveals large language models’ strengths
and biases. arXiv preprint arXiv:2305.14930.
Kaya Stechly, Matthew Marquez, and Subbarao Kamb[hampati. 2023. Gpt-4 doesn’t know it’s wrong: An](https://arxiv.org/abs/2310.12397)
[analysis of iterative prompting for reasoning prob-](https://arxiv.org/abs/2310.12397)
[lems. arXiv preprint arXiv:2310.12397.](https://arxiv.org/abs/2310.12397)
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and
[Jonathan Berant. 2019. CommonsenseQA: A ques-](https://doi.org/10.18653/v1/N19-1421)
[tion answering challenge targeting commonsense](https://doi.org/10.18653/v1/N19-1421)
[knowledge. In Proceedings of the 2019 Conference](https://doi.org/10.18653/v1/N19-1421)
_of the North American Chapter of the Association for_
_Computational Linguistics: Human Language Tech-_
_nologies, Volume 1 (Long and Short Papers), pages_
4149–4158, Minneapolis, Minnesota. Association for
Computational Linguistics.
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay
Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti
Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint_
_arXiv:2307.09288._
Karthik Valmeekam, Matthew Marquez, Sarath Sreed[haran, and Subbarao Kambhampati. 2023. On the](http://arxiv.org/abs/2305.15771)
[planning abilities of large language models – a criti-](http://arxiv.org/abs/2305.15771)
[cal investigation.](http://arxiv.org/abs/2305.15771)
Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan,
and Subbarao Kambhampati. 2022. Large language
models still can’t plan (a benchmark for llms on planning and reasoning about change). arXiv preprint
_arXiv:2206.10498._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
[Denny Zhou. 2023. Self-consistency improves chain](http://arxiv.org/abs/2203.11171)
[of thought reasoning in language models.](http://arxiv.org/abs/2203.11171)
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and
[Denny Zhou. 2022. Chain of thought prompting](https://arxiv.org/pdf/2201.11903)
[elicits reasoning in large language models. In Con-](https://arxiv.org/pdf/2201.11903)
_ference on Neural Information Processing Systems_
_(NeurIPS)._
Jules White, Quchen Fu, Sam Hays, Michael Sandborn,
Carlos Olea, Henry Gilbert, Ashraf Elnashar, Jesse
Spencer-Smith, and Douglas C Schmidt. 2023. A
prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,
et al. 2019. Huggingface’s transformers: State-ofthe-art natural language processing. arXiv preprint
_arXiv:1910.03771._
Ori Yoran, Tomer Wolfson, Ben Bogin, Uri Katz, Daniel
Deutch, and Jonathan Berant. 2023. Answering
questions by meta-reasoning over multiple chains
of thought. arXiv preprint arXiv:2304.13007.
Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei,
Nathan Scales, Xuezhi Wang, Dale Schuurmans,
Claire Cui, Olivier Bousquet, Quoc Le, and Ed Chi.
2022. Least-to-most prompting enables complex reasoning in large language models. _arXiv preprint_
_arXiv:2205.10625._
Pei Zhou, Jay Pujara, Xiang Ren, Xinyun Chen, HengTze Cheng, Quoc V Le, Ed H Chi, Denny Zhou, Swaroop Mishra, and Huaixiu Steven Zheng. 2024. Selfdiscover: Large language models self-compose reasoning structures. arXiv preprint arXiv:2402.03620.
-----
**Appendix**
**A** **Prompt used for DIVERSEPROMPTING**
Our diverse prompting strategy for IDIV-SE and
DIV-SE is showcased in Fig. 2 and Fig. 4 respectively. The instrumental prompt template that determines our approaches is presented in Fig. 5.
**B** **Model Details**
**B.1** **Open-Source Models**
We perform the Llama-2 70B experiments with
a single 80GB A100 GPU. To fit the 70B model
to a single A100, we use 8-bit precision through
bitsandbytes (Dettmers et al., 2022a,b). Further,
Dettmers et al. (2022a) reports no performance
drop with this quantization method.
As the system prompt, we use You are a
helpful, respectful and honest assistant.
We perform inference with greedy decoding, having temperature T = 0.
**C** **Additional Results**
In this section, we provide additional results on
COMMONSENSEQA and GSM8K benchmarks.
**C.1** **Common sense via COMMONSENSEQA**
Table 2 presents the results of the experiments.
Overall, the improvements in accuracy are relatively modest. This is likely because answering
questions in COMMONSENSEQA does not demand
as much reasoning and thought diversity as is required in some other benchmarks. In addition, the
dataset also contains a number of ambiguous questions, which if read verbatim may have many plausible answers but the ground truth contains only
one answer.
**C.2** **Arithmetic reseasoning via GSM8K**
_GPT-4 Results: As shown in Fig. 6, accuracy on_
GSM8K have nearly plateaued, with the ZS-CoT
and FS-CoT baselines achieving accuracies of 94%
and 95% respectively. IDIV-SE does not produce
any significant gains in either setting. On the other
hand, DIV-SE reaches accuracy of 96.3% in both
FS-CoT and ZS-CoT settings, providing a modest
improvement.
_GPT-3.5 Results: Here, the gains are more substan-_
tial. Compared to the ZS-CoT baseline of 76.11%,
IDIV-SE provides an improvement of 5.31 p.p.
DIV-SE goes a step further, enhancing the accuracy by 10.39 p.p. In the FS-CoT setting, DIV-SE
posts an accuracy improvement of 7.68 p.p (with a
baseline accuracy of 81.4%).
Fig. 3 (rightmost) presents the cost vs. accuracy trade-offs between IDIV-SE, DIV-SE, and SC.
While the performance of SC does improve with
the expansion of reasoning paths, both IDIV-SE
and DIV-SE offer better trade-offs.
**D** **Evaluating Ensemble Sizes**
Figure 6 depicts the average accuracy of different ensemble sizes on GSM8K for both ZS-CoT
and FS-CoT settings, utilizing GPT-4 and GPT-3.5.
Similarly, Figure 7 demonstrates the average accuracy of various ensemble sizes on AquA for both
ZS-CoT and FS-CoT settings, using GPT-4 and
GPT-3.5. It is noteworthy that in both AQuA and
GSM8K, even an ensemble of size three yields significant performance improvements over the baseline, which we attribute to the high diversity and
independence of reasoning paths.
**E** **Prompt Templates**
The following section provides a comprehensive visual representation of the prompts used in our study.
These prompts, depicted in Figures 9 through 16,
were used in different settings and for the planning, AQuA, and graph coloring benchmarks, and
incorporate various personas and approaches.
Figure 9 illustrates the prompt used in the baseline run, Figure 10 demonstrates the prompt employed when applying a Finite State Machine approach, Figure 11 depicts the prompt used when
incorporating the persona of Alan Turing and an
Action Rationale approach, and lastly, Figure 12
shows the prompt used when applying the persona
of Alan Turing and the Progressive Block Placement approach in the zero-shot setting for the planning benchmark.
Figure 13 illustrates the algebraic approach,
while Figure 14 and Figure 15 demonstrate the
prompts incorporating the personas of Alan Turing
and Dr. Patel, a renowned mathematician, respectively, in the few-shot-CoT setting for the AQuA
benchmark.
In Figure 16 we present the IDIV-SE prompt
used in the zero-shot setting for the graph coloring
benchmark.
-----
**DIV-SE Prompting**
Output 1
**<using direct calculation>: The candle burns**
for 4 hours from 1:00 PM to 5:00 PM. (…) The
candle will be 8 centimeters shorter. The answer
is 8 centimeters.
Output 2
**<using visualization>: Here's the visual**
representation:
|-----------------| (original length)
|--------| (1 hour later)
|----| (2 hours later)
(…)At 5:00 PM, the candle is 8 centimeters
shorter than its original length. The answer is 8
centimeters.
Output 3
**<using algebra>: Let x be the original length**
of the candle. After 4 hours(…) the candle will
be 2 centimeters shorter at the end of 4 hours.
The answer is 2 centimeters.
Input 1
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday,
he lost 2 more. How many golf balls did he have at the end of wednesday?
**<Using direct calculation>: Subtract the total number of golf balls lost from**
(…) Therefore, he had 33 golf balls at the end of Wednesday.
Q: A candle melts by 2 centimeters every hour that it burns. How many
centimeters shorter will a candle be after burning from 1:00 PM to 5:00 PM?
Input 2
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday,
he lost 2 more. How many golf balls did he have at the end of wednesday?
**<Using visualization>: Represent the problem using a diagram with three**
stages: (….) leaving him with 33 golf balls at the end of Wednesday.
Q: A candle melts by 2 centimeters every hour that it burns. How many
centimeters shorter will a candle be after burning from 1:00 PM to 5:00 PM
Input 3
Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wed,lost 2
more. How many golf balls did he have at the end of wednesday?
**<Using algebra>: Let x be the number of golf balls Michael had …we find**
that Michael had 33 golf balls at the end of Wednesday.
Q: A candle melts by 2 centimeters every hour that it burns. How many
centimeters shorter will a candle be after burning from 1:00 PM to 5:00 PM?
Figure 4: DIV-SE prompting.
Use five distinct approaches to solve the given problem accurately. If there is no exact match choose the closest
option.
Q: {Question}
Use the following output format:
Approach 1 < name of the approach > : < Details of Approach 1 >
Approach 2 < name of the approach > : < Details of Approach 2 >
Approach 3 < name of the approach > : < Details of Approach 3 >
Approach 4 < name of the approach > : < Details of Approach 4 >
Approach 5 < name of the approach > : < Details of Approach 5 >
Figure 5: Prompt template for extracting diverse approaches for problem solving.
-----
|GPT-3.5,|ZS-CoT|Col3|
|---|---|---|
|DIV-SE CoT IDIV-SE-5|||
||||
DIV-SE
CoT
IDIV-SE-5
2 3 4
Ensemble Size
GPT-3.5, FS-CoT
2 3 4
Ensemble Size
GPT-4, ZS-CoT
2 3 4
Ensemble Size
GPT-4, FS-CoT
2 3 4
Ensemble Size
86
84
82
80
78
76
96.2
96.0
95.8
95.6
95.4
95.2
88
86
96.0
95.5
84
82
95.0
94.5
80
94.0
Figure 6: Average accuracy for different ensemble sizes on GSM8K for ZS-CoT and FS-CoT settings on GPT-4
and GPT-3.5. Note that all graphs are zoomed in.
GPT-3.5, ZS-CoT
|DIV-SE CoT IDIV-SE-|5|Col3|
|---|---|---|
||||
DIV-SE
CoT
IDIV-SE-5
2 3 4
Ensemble Size
GPT-3.5, FS-CoT
2 3 4
Ensemble Size
GPT-4, ZS-CoT
2 3 4
Ensemble Size
GPT-4, FS-CoT
2 3 4
Ensemble Size
84
82
80
78
76
74
72
72.5
70.0
67.5
65.0
62.5
60.0
72.5
70.0
67.5
65.0
62.5
60.0
57.5
80
78
76
74
72
70
Figure 7: Average accuracy for different ensemble sizes on AQUA-RAT for ZS-CoT and FS-CoT settings on
GPT-4 and GPT-3.5. Note that all graphs are zoomed in.
-----
**Input Prompt**
Solve the problem accurately using {approach name}.
Question: John found that the average of 15 numbers is 40. If 10 is added to
each number then the mean of the numbers
Answer Choices: (A) 50 (B) 45 (C) 65 (D) 78 (E) 64
A: If 10 is added to each number, then the mean of the numbers also increases by 10. So
the new mean would be 50. The answer is (A).
Approach <using {approach name} >:
**Augmented Few-shot Examples**
**Approach <using visualization>:**
To solve this problem using visualization, let’s consider the given information and the effect
of adding 10 to each number on the average.
1. The average of 15 numbers is 40. This means that the sum of these 15 numbers is 15
- 40 = 600.
2. Now, we add 10 to each number. Since there are 15 numbers, the total added is 15 *
10 = 150.
3. The new sum of the 15 numbers is 600 + 150 = 750.
4. To find the new average, we divide the new sum by the number of values, which is
750 / 15 = 50. So, the mean of the numbers after adding 10 to each number is 50. The
answer is: (A) 50
**Approach <using logical reasoning>:**
We know that adding the same constant to each number in a set will increase the average
by that constant. In this case, we are adding 10 to each number, so the average will
increase by 10. New average = 40 + 10 = 50 The answer is: (A) 50
**Approach <using elimination strategy>:**
We know that adding 10 to each number will increase the average by 10. So, we’re looking
for an answer choice that is 10 more than the original average of 40.
(A) 50 is 10 more than 40.
(B) 45 is only 5 more than 40.
(C) 65 is 25 more than 40.
(D) 78 is 38 more than 40.
(E) 64 is 24 more than 40.
Only (A) 50 is exactly 10 more than the original average of 40, so that’s the correct answer.
The other choices can be eliminated because they are not 10 more than the original average.
**Approach <using algebra>:**
Let’s denote the sum of the 15 numbers as S. Since the average of the 15 numbers is 40,
we can write the equation: S / 15 = 40 S = 40 * 15 = 600 Now, we add 10 to each
number, so the new sum is S + 15 * 10 = 600 + 150 = 750. The new average is: New
average = 750 / 15 = 50 The answer is: (A) 50
Figure 8: Illustration of Augmentation of few-shot examples - Algorithm 1, Step 4
-----
I am playing with a set of blocks where I need to arrange the blocks into stacks.
[STATEMENT]
As initial conditions I have that, the orange block is clear, the hand is empty, the blue
block is on top of the red block, the orange block is on top of the blue block and the red
block is on the table. My goal is to have that the red block on top of the blue block and
the orange block on top of the red block.
Here are the actions I can do:
Pick up a block from the table
Unstack a block from on top of another block
Put down a block on the table
Stack a block on top of another block
I have the following restrictions on my actions:
I can only pick up or unstack one block at a time.
I can only pick up or unstack a block if my hand is empty.
I can only pick up a block if the block is on the table and the block is clear. A block is
clear if the block has no other blocks on top of it and if the block is not picked up.
I can only unstack a block from on top of another block if the block I am unstacking was
really on top of the other block.
I can only unstack a block from on top of another block if the block I am unstacking is
clear.
Once I pick up or unstack a block, I am holding the block.
I can only put down a block that I am holding.
I can only stack a block on top and not under of another block if I am holding the block
being stacked.
I can only stack a block on top and not under of another block if the block onto which I
am stacking the block is clear.
Once I put down or stack a block, my hand becomes empty.
Once you stack a block on top of a second block, the second block is no longer clear.
What is the plan to achieve my goal? Just give the actions in the plan.
[PLAN]
Figure 9: Zero-shot prompt used in the baseline run of the Planning - Blocksworld Domain
-----
You are playing with a set of blocks where you need to arrange the blocks into stacks.
What is the plan to achieve the goal?
<Initial State> : As initial conditions you have that, the orange block is clear, the
hand is empty, the blue block is on top of the red block, the orange block is on top of the
blue block and the red block is on the table.
<Goal State> : Your goal is to have that the red block on top of the blue block and the
orange block on top of the red block.
Here are the actions you can do:
-Pick up a block from the table
-Unstack a block from on top of another block
-Put down a block on the table
-Stack a block on top of another block
Rules:
1. You can only pick up or unstack one block at a time.
2. You can only pick up or unstack a block if your hand is empty.
3. You can only pick up a block if the block is on the table and the block is clear. A block
is clear if the block has no other blocks on top of it and if the block is not picked up.
4. You can only unstack a block from on top of another block if the block you are
unstacking was really on top of the other block.
5. You can only unstack a block from on top of another block if the block you are
unstacking is clear.
6. Once you pick up or unstack a block, you are holding the block.
7. You can only put down a block that you are holding.
8. You can only stack a block on top and not under of another block if you are holding
the block being stacked.
9. You can only stack a block on top and not under of another block if the block onto
which you are stacking the block is clear.
10. Once you put down or stack a block, your hand becomes empty.
11. Once you stack a block on top of a second block, the second block is no longer clear.
Using a finite state machine and a search algorithm what is the plan to achieve the
goal? You can model each state of the blocks configuration on the table and the hand as
a state. For each action step check that the step follows the rules and that the step brings
you closer to the goal. After each action describe the state of the table and hand. Always
check whether the final state satisfies the goal mentioned. <Goal State> : Your goal is to
have that the red block on top of the blue block and the orange block on top of the red block.
[PLAN]
Figure 10: The Zero-shot prompt using Finite State Machine Approach for solving the Planning - Blocksworld
Domain Problem.
-----
You are playing with a set of blocks where you need to arrange the blocks into stacks.
<Initial State> : As initial conditions you have that, the orange block is clear, the
hand is empty, the blue block is on top of the red block, the orange block is on top of the
blue block and the red block is on the table.
<Goal State> : Your goal is to have that the red block on top of the blue block
and the orange block on top of the red block.
Here are the actions you can do:
-Pick up a block from the table
-Unstack a block from on top of another block
-Put down a block on the table
-Stack a block on top of another block
Rules:
1. You can only pick up or unstack one block at a time.
2. You can only pick up or unstack a block if your hand is empty.
3. You can only pick up a block if the block is on the table and the block is clear. A block
is clear if the block has no other blocks on top of it and if the block is not picked up.
4. You can only unstack a block from on top of another block if the block you are
unstacking was really on top of the other block.
5. You can only unstack a block from on top of another block if the block you are
unstacking is clear.
6. Once you pick up or unstack a block, you are holding the block.
7. You can only put down a block that you are holding.
8. You can only stack a block on top and not under of another block if you are holding
the block being stacked.
9. You can only stack a block on top and not under of another block if the block onto
which you are stacking the block is clear.
10. Once you put down or stack a block, your hand becomes empty.
11. Once you stack a block on top of a second block, the second block is no longer clear.
Thinking like Alan Turing starting from the <Initial State> build a plan to get to
the <Goal State>. For each action step carefully check that the step follows the rules.
<Goal State> : Your goal is to have that the red block on top of the blue block and the
orange block on top of the red block.
output format for each step until you reach the goal state:
<state> : <state>
<action> : < action to be performed in this step >
<assess the action> : < are we building the stack bottom up, check carefully>
Figure 11: The Zero-shot prompt used with the persona of Alan Turing and Action Rationale approach for solving
the Planning - Blocksworld Domain Problem.
-----
You are playing with a set of blocks where you need to arrange the blocks into stacks.
<Initial State> : As initial conditions you have that, the orange block is clear, the
hand is empty, the blue block is on top of the red block, the orange block is on top of the
blue block and the red block is on the table.
<Goal State> : Your goal is to have that the red block on top of the blue block and the
orange block on top of the red block.
Here are the actions you can do:
-Pick up a block from the table
-Unstack a block from on top of another block
-Put down a block on the table
-Stack a block on top of another block
Rules:
1. You can only pick up or unstack one block at a time.
2. You can only pick up or unstack a block if your hand is empty.
3. You can only pick up a block if the block is on the table and the block is clear. A block
is clear if the block has no other blocks on top of it and if the block is not picked up.
4. You can only unstack a block from on top of another block if the block you are
unstacking was really on top of the other block.
5. You can only unstack a block from on top of another block if the block you are
unstacking is clear.
6. Once you pick up or unstack a block, you are holding the block.
7. You can only put down a block that you are holding.
8. You can only stack a block on top and not under of another block if you are holding
the block being stacked.
9. You can only stack a block on top and not under of another block if the block onto
which you are stacking the block is clear.
10. Once you put down or stack a block, your hand becomes empty.
11. Once you stack a block on top of a second block, the second block is no longer clear.
Thinking like Alan Turing, starting from the <Initial State> build a plan to get to
the <Goal State> . For each action step carefully check that the step follows the
rules. Divide the task into smaller steps, starting with placing the bottom block first,
followed by the middle blocks, and finally the top block. <Goal State> : Your goal is to
have that the red block on top of the blue block and the orange block on top of the red block.
[PLAN]
Figure 12: The Zero-shot prompt used in the Alan Turing + Progressive Block Placement Approach for solving the
Planning - Blocksworld Domain Problem.
-----
As a math professor, you will solve the given problem accurately ’<using algebra>’. If
there is no exact match choose the closest option.
Question: John found that the average of 15 numbers is 40. If 10 is added to each number
then the mean of the numbers
Answer Choices: (A) 50 (B) 45 (C) 65 (D) 78 (E) 64
Approach: <using algebra>
Let’s denote the sum of the 15 numbers as S. Since the average of the 15 numbers is 40,
we can write the equation: S / 15 = 40 S = 40 * 15 = 600 Now, we add 10 to each
number, so the new sum is S + 15 * 10 = 600 + 150 = 750. The new average is: New
average = 750 / 15 = 50
The answer is: (A) 50
Question: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.
Answer Choices: (A) 1/2 (B) 3/2 (C) 5/2 (D) 4/2 (E) 7/2
Approach: <using algebra>
To solve the given problem, we can use substitution. Since a / b = 3/4, we can write that
as: a = 3b/4 Now, substitute this expression for a into the second equation: 8(3b/4) +
5b = 22 Simplify and solve for b: 6b + 5b = 22 11b = 22 b = 2 Now that we have the
value of b, we can find the value of a: a = 3b/4 a = 3(2)/4 a = 6/4 a = 3/2 So, the
value of a is (B) 3/2.
The answer is: (B) 3/2
Question: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then find
the distance?
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
Approach: <using algebra>
Using the formula distance = speed × time, we can calculate the distance as follows:
Distance = 20 km/hr × 2.5 hr = 50 km So, the closest answer choice is (e) 50 km.
The answer is: (e) 50 km
Question: How many keystrokes are needed to type the numbers from 1 to 500?
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
Approach: <using algebra>
Let’s break down the number of keystrokes needed into groups based on the number of
digits: One-digit numbers (1-9): There are 9 one-digit numbers, so we need 9 keystrokes.
Two-digit numbers (10-99): There are 90 two-digit numbers, each requiring 2 keystrokes,
so we need 90 * 2 = 180 keystrokes. Three-digit numbers (100-500): There are 401
three-digit numbers (500 - 100 + 1), each requiring 3 keystrokes, so we need 401 * 3 =
1203 keystrokes. Now let’s add up the keystrokes from all groups: 9 + 180 + 1203 =
1392. Therefore, the closest option is (b) 1392.
The answer is: (b) 1392
Figure 13: AQuA few-shot-CoT prompt using algebraic approach.
-----
Think like Alan Turing and solve the problem.If there is no exact match choose the closest
option.
Question: John found that the average of 15 numbers is 40. If 10 is added to
each number then the mean of the numbers
Answer Choices: (A) 50 (B) 45 (C) 65 (D) 78 (E) 64
Approach : <think like Alan Turing>
Let’s analyze the problem step by step: The average of 15 numbers is 40. So, the sum of
these 15 numbers is 15 * 40 = 600. 10 is added to each number. Now, since there are 15
numbers, the total increase in the sum will be 15 * 10 = 150. The new sum of these 15
numbers will be 600 + 150 = 750. To find the new mean, divide the new sum by the total
number of values, which is 15. New mean = 750 / 15 = 50. Therefore, the closest option
to the new mean is (A) 50.
The answer is: (A) 50
Question: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.
Answer Choices: (A) 1/2 (B) 3/2 (C) 5/2 (D) 4/2 (E) 7/2
Approach : <think like Alan Turing>
Let us analyze the given information and solve the problem step by step. We are given
that a / b = 3/4, which can be written as a = (3/4) * b. We are also given the equation
8a + 5b = 22. Substitute the value of a from step 1 into the equation from step 2: 8 *
(3/4) * b + 5b = 22 Simplify the equation: 6b + 5b = 22 Combine the terms and solve
for b: 11b = 22 b = 2 Now that we have the value of b, let’s find the value of a using the
equation from step 1: a = (3/4) * b a = (3/4) * 2 Simplify to find the value of a: a =
3/2 Thus, the value of a is 3/2, which corresponds to answer choice (B).
The answer is: (B)
Question: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then
find the distance?
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
Approach : <think like Alan Turing>
To find the distance, we can use the formula: distance = speed × time. In this case, the
speed is 20 km/hr and the time is 2.5 hours. Distance = 20 km/hr × 2.5 hr = 50 km So,
the closest option is (e) 50 km.
The answer is: (e) 50 km
Question: How many keystrokes are needed to type the numbers from 1 to 500?
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
Approach : <think like Alan Turing>
To determine the number of keystrokes needed to type the numbers from 1 to 500, let’s
break it down by digit count: One-digit numbers (1-9): There are 9 one-digit numbers,
each requiring 1 keystroke. Total: 9 keystrokes. Two-digit numbers (10-99): There
are 90 two-digit numbers, each requiring 2 keystrokes. Total: 90 * 2 = 180 keystrokes.
Three-digit numbers (100-500): There are 401 three-digit numbers, each requiring 3
keystrokes. Total: 401 * 3 = 1203 keystrokes. Adding the keystrokes for each group, we
get: 9 + 180 + 1203 = 1392 keystrokes. Thus, the closest option is (b) 1392
The answer is: (b) 1392
Figure 14: AQuA few-shot-CoT prompt with the persona of Alan Turing.
-----
Think like Dr. Patel, the mathematician who specializes in algebra and solve the problem.
If there is no exact match choose the closest option.
Question: John found that the average of 15 numbers is 40. If 10 is added to
each number then the mean of the numbers
Answer Choices: (A) 50 (B) 45 (C) 65 (D) 78 (E) 64
Approach : <think like Dr. Patel>
Let’s consider the problem step by step. John found that the average of 15 numbers is 40.
To find the sum of these 15 numbers, we can use the formula: Sum = Average × Number
of elements Sum = 40 × 15 Sum = 600 Now, 10 is added to each of the 15 numbers. So,
we can find the new sum by adding 10 × 15 = 150 to the original sum: New Sum = 600
+ 150 New Sum = 750 To find the new average (mean), we can use the formula: New
Average = New Sum ÷ Number of elements New Average = 750 ÷ 15 New Average = 50
The answer is: (A) 50
Question: If a / b = 3/4 and 8a + 5b = 22,then find the value of a.
Answer Choices: (A) 1/2 (B) 3/2 (C) 5/2 (D) 4/2 (E) 7/2
Approach : <think like Dr. Patel>
To solve this problem, first we can rewrite the equation a / b = 3/4 as a = (3/4)b. Now
we can substitute this expression for a into the second equation: 8a + 5b = 22. Substitute
a with (3/4)b: 8(3/4)b + 5b = 22 Simplify the equation: 6b + 5b = 22 Combine the
terms: 11b = 22 Now, divide both sides by 11: b = 2 Now that we have the value of b,
we can find the value of a using the equation a = (3/4)b: a = (3/4)(2) a = 3/2 The
value of a is 3/2. So, the correct answer is (B) 3/2.
The answer is: (B) 3/2
Question: A person is traveling at 20 km/hr and reached his destiny in 2.5 hr then
find the distance?
Answer Choices: (a) 53 km (b) 55 km (c) 52 km (d) 60 km (e) 50 km
Approach : <think like Dr. Patel>
To find the distance, we can use the formula: Distance = Speed × Time Here, the speed
is given as 20 km/hr and the time taken is 2.5 hours. So, we can plug these values into
the formula: Distance = 20 km/hr × 2.5 hr Distance = 50 km
The answer is: (e) 50 km
Question: How many keystrokes are needed to type the numbers from 1 to 500?
Answer Choices: (a) 1156 (b) 1392 (c) 1480 (d) 1562 (e) 1788
Approach : <think like Dr. Patel>
To determine the number of keystrokes needed to type the numbers from 1 to 500, let’s
break down the problem by analyzing the number of digits in each range: For numbers
1-9, there are 9 numbers with 1 digit each, requiring 9 keystrokes. For numbers 10-99,
there are 90 numbers with 2 digits each, requiring 90 * 2 = 180 keystrokes. For numbers
100-499, there are 400 numbers with 3 digits each, requiring 400 * 3 = 1200 keystrokes.
For the number 500, it has 3 digits, requiring 3 keystrokes. Now, we can sum up the
keystrokes for each range: 9 (for 1-9) + 180 (for 10-99) + 1200 (for 100-499) + 3 (for
500) = 1392 keystrokes.
The answer is: (b) 1392
Figure 15: AQuA few-shot-CoT prompt with the persona of Dr. Patel: A renowned mathematician
-----
As a math professor, use 3 distinct approaches and without using built-in algorithms, write
python programs to color the following graph, described as a set of edges, such that no
two vertices on the same edge share a color.
You may use at most 3 colors.
Vertex 0 is connected to vertex 7.
Vertex 0 is connected to vertex 8.
Vertex 0 is connected to vertex 9.
Vertex 0 is connected to vertex 11.
Vertex 1 is connected to vertex 13.
Vertex 2 is connected to vertex 9.
Vertex 3 is connected to vertex 8.
Vertex 3 is connected to vertex 11.
Vertex 3 is connected to vertex 12.
Vertex 4 is connected to vertex 12.
Vertex 5 is connected to vertex 11.
Vertex 6 is connected to vertex 9.
Vertex 7 is connected to vertex 10.
Vertex 7 is connected to vertex 13.
Vertex 9 is connected to vertex 11.
Vertex 10 is connected to vertex 13.
Vertex 11 is connected to vertex 13.
There are a total of 14 vertices. Please label every vertex, even if it is disconnected from
the rest of the graph.Please provide each vertex’s color. Do not skip any vertices. Each
color must be provided on a new line in the response and should be formatted as "VERTEX
NUMBER: VERTEX COLOR ASSIGNMENT (Color n)".
Output format:
Approach 1 <name of the approach> : < python program from scratch to color the given
graph accurately >
Approach 2 <name of the approach> : < python program from scratch to color the given
graph accurately>
Approach 3 <name of the approach> : < python program from scratch to color the given
graph accurately>
Figure 16: Graph Coloring prompt using a programming approach in the zero-shot setting.
-----
| [
"Ranjita, Naik",
"Varun, Chandrasekaran",
"Besmira, Nushi",
"Hamid, Palangi",
"Mert, Yuksekgonul"
] | 2024-02-23T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2310.07088 | https://arxiv.org/abs/2310.07088 | https://www.semanticscholar.org/paper/0d943aa547690c40aff35b4e0b329bf04aedc59d |
Do Language Models See a Space? | N/A | null | [
", Jan"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | https://aitp-conference.org/2024/abstract/AITP_2024_paper_25.pdf | null | null |
|
Don't Trust: Verify -- Grounding LLM Quantitative Reasoning with Autoformalization | Large language models (LLM), such as Google's Minerva and OpenAI's GPT families, are becoming increasingly capable of solving mathematical quantitative reasoning problems. However, they still make unjustified logical and computational errors in their reasoning steps and answers. In this paper, we leverage the fact that if the training corpus of LLMs contained sufficiently many examples of formal mathematics (e.g. in Isabelle, a formal theorem proving environment), they can be prompted to translate i.e. autoformalize informal mathematical statements into formal Isabelle code --- which can be verified automatically for internal consistency. This provides a mechanism to automatically reject solutions whose formalized versions are inconsistent within themselves or with the formalized problem statement. We evaluate our method on GSM8K, MATH and MultiArith datasets and demonstrate that our approach provides a consistently better heuristic than vanilla majority voting --- the previously best method to identify correct answers, by more than 12\% on GSM8K. In our experiments it improves results consistently across all datasets and LLM model sizes. | null | ## DON’T TRUST: VERIFY – GROUNDING LLM QUANTI### TATIVE REASONING WITH AUTOFORMALIZATION
**Jin Peng Zhou[∗]** **Charles Staats[†]** **Wenda Li** **Christian Szegedy[†]**
Cornell University University of Edinburgh xAI
**Kilian Q. Weinberger** **Yuhuai Wu[†]**
Cornell University xAI
ABSTRACT
Large language models (LLM), such as Google’s Minerva and OpenAI’s GPT
families, are becoming increasingly capable of solving mathematical quantitative
reasoning problems. However, they still make unjustified logical and computational
errors in their reasoning steps and answers. In this paper, we leverage the fact
that if the training corpus of LLMs contained sufficiently many examples of
formal mathematics (e.g. in Isabelle, a formal theorem proving environment), they
can be prompted to translate i.e. autoformalize informal mathematical statements
into formal Isabelle code — which can be verified automatically for internal
consistency. This provides a mechanism to automatically reject solutions whose
formalized versions are inconsistent within themselves or with the formalized
problem statement. We evaluate our method on GSM8K, MATH and MultiArith
datasets and demonstrate that our approach provides a consistently better heuristic
than vanilla majority voting — the previously best method to identify correct
answers, by more than 12% on GSM8K. In our experiments it improves results
consistently across all datasets and LLM model sizes. The code can be found at
[https://github.com/jinpz/dtv.](https://github.com/jinpz/dtv)
1 INTRODUCTION
Recently, language models (Devlin et al., 2018; Brown et al., 2020; Chowdhery et al., 2022) have
advanced significantly in many natural language processing tasks such as machine translation,
question answering, summarization, etc. More recent large language models (LLMs) such as Minerva (Lewkowycz et al., 2022), GPT3.5 (OpenAI) and GPT4 (OpenAI, 2023) have also become
increasingly capable of solving quantitative reasoning problems, ranging from middle school math
word problems (Cobbe et al., 2021) to challenging high school mathematical competition problems (Hendrycks et al., 2021). By training or finetuning the model on high-quality natural language
mathematical and scientific text, these LLMs can generate self-contained step-by-step solutions to
quantitative reasoning problems without relying on external tools. However, just like human beings,
the solutions LLMs generate are prone to simple calculation errors and unjustified logical leaps.
Following Wang et al. (2022); Lewkowycz et al. (2022), one can sample many proposed solutions,
extract the final answer from each, and select the most common answer. While aggregating answers
like this improves performance at the problem level, the most common answer is sometimes wrong.
Ideally, we would like a better heuristic to identify the correct answer. In fact, there are well-known
techniques for computers to verify mathematical reasoning using formalization (Wiedijk, 2008).
Those methods involve translating the problem and sometimes the solution into a formal language.
Unfortunately this translation is difficult and time-intensive enough that formalization methods are
less frequently leveraged.
Recently it was discovered that large language models, with few-shot prompting, can automatically
formalize natural mathematical language into formal languages (Agrawal et al., 2022; Azerbayev
_∗Work done while interning at Google Research. Correspondence to: [email protected]._
_†Work done while at Google Research._
-----
|Informal Reasoning Informal Statement Find all positive roots of x2 −4 = 0. LLM Informal Solution We know that if x2 −4 = 0, then x2 = 4. Hence x = ± 2. Since x is positive, x = 2 . Answer|Formal Reasoning Formal Statement theorem “(x::real)^2 - 4 = 0 ∧ x > 0 ⟷ x = 2” Formal Solution assume “x^2 - 4 = 0” and pos: “0 < x” then have “x = 2 ∨ x = -2” [ATP] with pos show “x = 2” [ATP]|
|---|---|
**Informal Statement**
Find all positive roots
of x[2] −4 = 0.
**Formal Statement**
theorem
“(x::real)^2 - 4 = 0
∧ x > 0 ⟷ x = 2”
**Informal Solution**
We know that if x[2] −4 = 0,
then x[2] = 4. Hence x = ± 2.
Since is positive, x _x =_ .2
**Formal Solution**
assume “x^2 - 4 = 0”
and pos: “0 < x”
then have “x = 2 x = -2”∨
[ATP]
with pos show “x = 2”
[ATP]
**Informal Solution #1** **Informal Solution #2**
…, hence x = . 2 …, hence x = ±2 .
**Formal Statement #1** **Formal Statement #2**
theorem theorem
“(x::real)^2 - 4 = 0 “(x::real)^2 - 4 = 0
∧ x > 0 ⟷ x = 2” ∧ x > 0 ⟷ x ∈ {−2,2}”
**…**
**Formal Solution #1** **Formal Solution #2**
assume “x^2 - 4 = 0” assume “x^2 - 4 = 0”
and pos: “0 < x” and “0 < x”
then have “x = 2 x = -2”∨ then have “x = 2 x = -2”∨
[ATP] [ATP]
with pos show “x = 2” then show “x ∈{−2,2}”
[ATP] [ATP]
**Automated**
∀¬ ∀¬
⟹ ∧ **Theorem Prover** ⟹ ∧
**Verified** **Not Verified**
Figure 1: A pictorial illustration of Don’t Trust: Verify. Left: starting from an informal statement,
an informal solution is generated by a large language model. The informal statement and solution
are then translated into their formal counterparts. Multiple informal solutions for each problem
are generated with temperature sampling. Right: An automated theorem prover in the formal
environment is used to verify formal solutions against formal statements step by step (indicated by
[ATP]). The final answer is chosen using majority voting over only the verified solutions. In this
example, Formal Solution #1 successfully proves Formal Statement #1. Formal Solution #2, however,
fails to prove the ”only if” direction of Formal Statement #2.
et al., 2023; Wu et al., 2022; Jiang et al., 2023; Wang et al., 2020). It is conjectured this capability
arises because the training data for these LLMs contains sufficiently many examples of computer
code and/or formal mathematics. While the translation is far from perfect and can fail on many
complex statements, the correctness of the statements produced can be checked using formal theorem
proving systems such as Isabelle (Nipkow et al., 2002) and Lean (de Moura et al., 2015).
In this paper, we demonstrate that in spite of its issues, autoformalization is already capable enough to
identify correct answers for many quantitative reasoning problems. We call our method Don’t Trust:
_Verify (DTV) and provide an overview in Figure 1. Intuitively, instead of naively taking majority_
voting of all generated natural language solutions, we only aggregate solutions that can be verified
with autoformalization. To carry out verification, both a formal statement and solution are needed.
Therefore, we attempt to translate the plain text problem and solution into formal language using large
language models. However, because language models can erroneously translate incorrect statements
into correct ones, we develop both symbolic and neural filters to improve the statement translation
reliability. Additionally, even with a correct formal statement, a formal solution directly translated
from informal solution can fail to prove it. In particular, correct informal solutions (whether generated
by humans or LLMs) skip low-level steps that are necessary for formal reasoning. To this end, we
instead generate a formal solution sketch following Wiedijk (2004); Jiang et al. (2023) and employ
an automated theorem prover (ATP) to fill in the low-level gaps.
We evaluate DTV on GSM8K (Cobbe et al., 2021), three subsets of MATH (Hendrycks et al.,
2021) following prior work (Zheng et al., 2021), and MultiArith (Roy & Roth, 2016) datasets. The
results show that our method consistently outperforms vanilla majority voting (Wang et al., 2022;
Lewkowycz et al., 2022), with a more than 12% improvement on GSM8K. We demonstrate that
DTV improves performance across various model sizes and categories. Additionally, we provide case
studies on our method identifying correct answers as well as informal solutions. Finally, we discuss
the limitations of our approach inherited from LLM and theorem proving environments.
2 BACKGROUND AND RELATED WORK
**Informal quantitative reasoning with language models. A number of large language models**
such as PaLM (Chowdhery et al., 2022), Minerva (Lewkowycz et al., 2022), GPT3.5 (OpenAI)
and GPT4 (OpenAI, 2023) have demonstrated their impressive quantitative reasoning abilities
by pretraining or finetuning on mathematical and science data. To improve informal reasoning
-----
performance, chain-of-thought prompting (Wei et al., 2022) is typically used to encourage language
models outputting intermediate reasoning steps before arriving at an answer. A diverse set of
prompting methods have since been proposed for informal reasoning that tries to find better examples
for prompting (Fu et al., 2023; Zhang et al., 2022) or better strategies for decoding from the model
such as conditioning on references, multi-step decoding (Creswell et al., 2022; Zheng et al., 2023;
Welleck et al., 2022; Khot et al., 2023). Cobbe et al. (2021) explores training informal verifiers to
judge the correctness of reasoning. Additionally, to alleviate erroneous reasoning with language
models, techniques that explore multi-sample consistency in the informal reasoning (Wang et al.,
2022; Jung et al., 2022) have also been proposed. Our approach does not require training and is
complementary to these methods since we rely on the consistency of a formal theorem proving system
to identify correct reasoning paths and improve the reliability of large language model reasoning.
**Augmented language models. Besides elucidating rationales via chain-of-thought prompting,**
recent research has also been devoted to augmenting LLMs with external tools such as web search
engine (Nakano et al., 2021; Lazaridou et al., 2022; Schick et al., 2023), external memory retrieval (Shi
et al., 2023; Izacard et al., 2022) and programming-based calculators (Chen et al., 2022; Gao et al.,
2022; Imani et al., 2023) to bolster their downstream performance and reduce hallucination. Our
work augments language models with a formal theorem proving environment that goes beyond simple
arithmetic (Chen et al., 2022; Imani et al., 2023) and Boolean logic (Jung et al., 2022).
**Interactive theorem proving and autoformalization. Modern interactive theorem provers such as**
Isabelle (Nipkow et al., 2002), Coq (Barras et al., 1997), and Lean (de Moura et al., 2015), provide
an interactive environment to encode and mechanically verify mathematical proofs. Success stories
with handcrafted proofs include the verification of industrial software systems (Klein et al., 2009)
and research-level mathematics (Castelvecchi et al., 2021). Szegedy (2020) argues for automatically
obtaining formal mathematics from their informal counterparts by translation i.e. autoformalization.
Since then, researchers have shown the feasibility of autoformalization using neural networks in
particular large language models (Agrawal et al., 2022; Azerbayev et al., 2023; Wu et al., 2022; Jiang
et al., 2023; Wang et al., 2020). Our work which aims to ground LLM reasoning with formal theorem
proving environments is distinct from Jiang et al. (2023) that improves theorem proving performance
with LLM. DTV assumes a much more difficult but realistic setting where only natural language data
is available. We are the first to demonstrate it is possible to automatically verify correct answer and
informal solutions with autoformalization.
3 DON’T TRUST: VERIFY
We describe our approach Don’t Trust: Verify (DTV). Given a mathematical question and several
solutions to it phrased in natural language, we assume the question has a well-defined answer and
each solution contains an answer that may or may not be correct. Our goal is to understand which
informal solutions are more likely to be correct. An overview of our method can be found in Figure 1.
3.1 STATEMENT FORMALIZATION
We begin by finding a formal statement that corresponds to the description of the informal problem
statement. In many mainstream formal theorem proving environments such as Isabelle (Nipkow et al.,
2002), Lean (de Moura et al., 2015), and Coq (Barras et al., 1997), a formal statement needs to be
an assertion rather than a question. For example, a formal statement cannot be in the form of Is π
_irrational? but rather Show that π is irrational. Since quantitative reasoning problems typically ask_
for an answer rather than a proof, we first extract the answer from each proposed informal solution.
The formal statement is then generated conditioned on both the plain text statement and the extracted
answer. Following Wu et al. (2022), we leverage the few-shot learning capability of large language
models to generate formal representation of the informal statements. Specifically, we provide a
few informal-formal statement translation pairs and prompt the language model to complete the
subsequent problem statement formalization.
3.2 SOLUTION FORMALIZATION AND VERIFICATION
The formalized statement itself does not tell us its correctness without a formal solution or proof.
Because of this, we seek to generate a piece of formal solution to verify the statement correctness.
-----
Ideally, formal solution steps could be directly translated from natural language solution steps by
sharing a similar level of abstraction. However, this could fail to prove the statement since formal
reasoning steps require additional low-level justification than their natural language counterparts
either written by human or LLMs. To address this issue, we generate a formal solution sketch
following Wiedijk (2004); Jiang et al. (2023) (see Figure 1 (left)). The formal solution sketch contains
high-level steps that are based on natural language counterparts and leaves low-level justifications to
an automated theorem prover (indicated as [ATP] in Figure 1). Similar to statement formalization,
we few-shot prompt a large language model to automatically generate such formal solution sketches.
To perform verification, we attempt to prove the formal statement with individual solution steps in a
formal theorem proving environment. We leverage the consistency of the formal environment and
its automated theorem prover to sequentially verify the steps. At any step, if the automated theorem
prover fails to close the gap, we consider the formal statement not verified and hence incorrect.
Besides, if there are still remaining goals to be proved after verifying every step, the formal statement
is also treated as unverified. For example, in Figure 1 (right) Formal Solution #2, although both steps
are correct, the formal statement cannot be proved since the original statement is an if and only if
statement, and the right-to-left direction is clearly false. If a formal statement is considered verified,
the corresponding informal solution and answer are also considered verified. To arrive at the final
answer to the problem, we take the most common answer among the verified informal solutions. If
no solution can be verified, we fallback to majority voting across all unverified solutions.
3.3 FILTERING UNFAITHFUL STATEMENT FORMALIZATIONS
For DTV to achieve good performance, it is crucial that informal and formal statements match
precisely with each other. This is because an answer that is definitely incorrect to one problem
could still be correct to an altered problem. For example, in the problem statement of Figure 1, if
the constraint of x being positive is omitted, the answer x = ±2 would be correct instead. This is
detrimental as the erroneously translated formal statements with incorrect answers can get verified by
DTV. We call such statements unfaithful. To mitigate this issue, we propose to employ two types of
filters to discard formal statements that are potentially unfaithful.
**Vacuous statement filter. In our preliminary experiments, we found that vacuous formal statements**
form one common category of verified but unfaithful statement translations. By vacuous we mean that
the formalized statement’s hypotheses are contradictory, and can thus be used to prove anything by
contradiction. The contradictory hypotheses are usually due to language model translation mistakes
rather than the original problem itself being contradictory. For instance, a translated statement could
constrain a variable x to simultaneously satisfy x > 0 and x < 0, leading to a contradiction. To
address this issue, we replace the formal statement goal with a simple statement of False and apply
the automated theorem prover. Any formal statement that can be proved after this substitution is
vacuous. We assume it is very unlikely that the natural language question is legitimately vacuous and
we discard such vacuous formalizations.
**Self-critique filter. There are other categories of unfaithful translations we find harder to identify**
symbolically with automated theorem provers. Formalization attempts generated by LLMs can
sometimes be irrelevant to the problem or will even modify a formula to make the statement correct.
Inspired by the fact that large language models are capable of critiquing their own outputs (Saunders
et al., 2022), we explicitly ask the language model to decide whether the formal statement is a faithful
translation of the informal statement. The likelihood of a yes versus no followed by a justification is
compared. If the likelihood of no outweighs yes, the formal statement is considered unfaithful and
discarded. It is worth noting that the self-critique filter can reject faithful formal statements as well as
miss unfaithful formal statements. We empirically verify its effectiveness in the experiment section.
4 EXPERIMENTS
4.1 DATASET AND FORMAL ENVIRONMENT
We evaluate DTV on three quantitative reasoning datasets: GSM8K (Cobbe et al., 2021),
MATH (Hendrycks et al., 2021) and MultiArith (Roy & Roth, 2016). These datasets have been used
extensively to evaluate the reasoning capability of large language models (Cobbe et al., 2021; Wei
et al., 2022; Lewkowycz et al., 2022; Zhang et al., 2022; OpenAI, 2023). GSM8K and MultiArith
-----
**Informal Statement from Prealgebra: At Annville Junior High School, 30% of the students in the**
Math Club are in the Science Club, and 80% of the students in the Science Club are in the Math Club.
There are 15 students in the Science Club. How many students are in the Math Club? Show that it is 40.
**Generated Formal Statement:**
**theorem**
**fixes math_club science_club math_and_science :: nat**
**assumes "math_and_science = 0.3 * math_club"**
**and "math_and_science = 0.8 * science_club"**
**and "science_club = 15"**
**shows "math_club = 40"**
**Informal Statement from Number Theory: What is the hundreds digit in the following product:**
5 × 6 × 7 × 8 × 9 × 10 ? Show that it is 2.
**Generated Formal Statement:**
**theorem**
"((5::nat) * 6 * 7 * 8 * 9 * 10) div 100 mod 10 = 2"
**Informal Statement from GSM8K: A herd consists of camels and dromedaries. There are 180 heads**
and 304 bumps. How many dromedaries are there if camels have two humps each and dromedaries
have one hump each? Show that it is 56.
**Generated Formal Statement:**
**theorem**
**fixes camels dromedaries :: nat**
**assumes "camels + dromedaries = 180"**
**and "2 * camels + dromedaries = 304"**
**shows "dromedaries = 56"**
**Informal Statement from Algebra: A parabola ax[2]** + bx + c contains the points (−1, 0), (0, 5), and
(5, 0). Find the value 100a + 10b + c. Show that it is -55.
**Generated Formal Statement:**
**theorem**
**fixes a b c :: real**
**assumes h0 : "a * (-1)ˆ2 + b * (-1) + c = 0"**
**and h1 : "a * 0ˆ2 + b * 0 + c = 5"**
**and h2 : "a * 5ˆ2 + b * 5 + c = 0"**
**shows "100 * a + 10 * b + c = -55"**
Figure 2: Examples of faithful formal statements translated from informal statements with correct
answers by DTV. Majority voting failed to solve all four problems but DTV solves them successfully.
The model is capable of translating complex informal statements precisely. Note that the sentence
“Show that it is [answer]” is appended to the original informal problem statement by first extracting
the answer from an informal solution. We provide more examples in Appendix A.1.
datasets contain grade-school arithmetic word problems. MATH dataset consists of high school mathematical competition problems drawn from AMC 10, AMC 12, AIME, etc that are more challenging.
The problems in MATH have also been grouped into 7 categories: Prealgebra, Algebra, Number
Theory, Counting and Probability, Geometry, Intermediate Algebra, and Precalculus. Following prior
work (Zheng et al., 2021) which only draws problems from algebra and number theory categories, we
evaluate on Prealgebra, Algebra and Number Theory subsets of the MATH dataset, which are most
applicable in current theorem proving environments.
Similar to Wu et al. (2022); Jiang et al. (2023), we use Isabelle (Nipkow et al., 2002) as the formal
theorem proving environment for the experiments since it has one of the largest formal corpora that
enables formalization of challenging problems as well as powerful automated theorem provers to
close low-level details in the formal solution sketches. We do not see fundamental limitations of our
-----
DTV framework applying to other formal languages such as Lean and Mizar and we leave this to
future work. We adopt the Portal-to-ISAbelle (Jiang et al., 2021) interface to interact with Isabelle.
4.2 BASELINES AND EVALUATION PROTOCOL
We compare DTV with two baselines: single sample and multiple samples with majority voting (Wang
et al., 2022; Lewkowycz et al., 2022). In single sample, for each problem, only one informal solution
is sampled greedily with T = 0. For the latter, multiple informal solutions are generated with
temperature sampling. To evaluate the performance, we consider a problem being solved correctly if
the most common answer matches the ground truth answer. For single sample, the most common
answer is the answer extracted from the solution whereas for multiple samples with majority voting, a
grouping of solutions based on the answer is performed and the most frequent answer is chosen.
4.3 EXPERIMENTAL SETUP
**Informal solution generation. We few-shot prompt large language models to generate n = 64**
informal solutions per problem conditioned on the informal problem statement. We experiment with
the 8B, 62B and 540B Minerva models (Lewkowycz et al., 2022). We use the default sampling
configuration (T = 0.6, nucleus sampling (Holtzman et al., 2019) p = 0.95) reported in Lewkowycz
et al. (2022) for Minerva models.
**Statement formalization. We prepare 25 paired (informal statement, Isabelle formal statement)**
examples for each of the three categories from MATH dataset, GSM8K and MultiAirth datasets as
the candidates for few-shot demonstrations. When prompting the model to formalize an informal
statement, we randomly draw and permute 10 examples to form the few-shot prompt. We always
ensure the problem to be translated does not appear in the few-shot prompt. To reduce randomness of
statement formalization process, for each informal solution, we perform 10 statement formalization
attempts. All few-shot demonstration examples can be found in the supplementary materials.
**Solution formalization and verification. We further select 10 problems from each dataset and**
manually write complete formalization examples including both statements and solutions in the form
of (informal statement, formal statement, informal solution, formal solution sketch) as examples
for solution formalization. We randomly select 3 examples for each few-shot prompt. We query
the language model once for solution formalization for each formal statement sample so that each
informal solution is in total formalized 10 × 1 times. We then use Isabelle to verify the formal
solution sketch against the generated formal statement. Sledgehammer (Paulsson & Blanchette, 2012)
along with 11 common tactics (auto, simp, blast, fastforce, force, eval, presburger, sos,
arith, linarith, and auto simp: field simps) are used to close low-level open conjectures
in the formal solution sketch. We consider the formal solution and consequently the corresponding
original informal solution correct if Sledgehammer and tactics succeed. An informal solution is kept
if any of the 10 formalization attempts is verified successfully.
**Unfaithful statement filters. For the vacuous statement filter, we change the goal to be proved in**
generated formal statement to false. This is accomplished by replacing text after show clause to
_show false and checking whether Sledgehammer and tactics can prove the modified formal statement._
Statements proved this way are discarded due to being vacuous. To create the self-critique filter,
we take 5 faithful and 5 unfaithful statement formalization examples generated by the model in the
preliminary experiments as few-shot demonstrations. The unfaithful formalization examples are
also accompanied by what errors it have. This few shot demonstration is kept fixed throughout the
experiments. Generated formal statements are discarded if the likelihood of an answer of yes is lower
than no for the language model.
4.4 EXPERIMENTAL RESULTS
Table 1 shows the percentage of problems solved for baselines and DTV. The informal solutions are
generated with Minerva 62B model (Lewkowycz et al., 2022). We consider two autoformalization
model choices: Minerva 62B and GPT3.5 (OpenAI). Due to the API inference time and cost, we only
use GPT3.5 to generate the formal statements, which is the most crucial component in our approach.
Minerva 62B is queried for both solution formalization and self-critique filter.
-----
It can be seen that majority voting is a strong baseline that significantly improves the language
model performance over single sample generation from an average of 42.8% to 61.5%, matching
the observations in Wang et al. (2022); Lewkowycz et al. (2022). With autoformalization and
verification in the formal environment, DTV outperforms majority voting baseline and achieves an
average performance of 65.0% using the same Minerva 62B without any finetuning, suggesting the
effectiveness of autoformalization. Since Minerva was mainly trained on natural language math
content, its autoformalization capability could be limited. To this end, with the same informal
solutions generated by Minerva, we switch the statement formalization model to GPT3.5. This
leads to an even larger improvement of average solve rate to 68.2%. We observe that the boost in
performance is consistent across all datasets and categories of problems. A larger improvement could
potentially be obtained by switching solution formalization and self-critique filter to GPT3.5.
Table 1: Comparison of problem solve rate on GSM8K, MATH Number Theory, Algebra, Prealgebra
and MultiArith evaluation sets. DTV consistently outperforms the two baselines that do not perform
autoformalization. *To reduce the time and cost of repeatedly calling OpenAI APIs, GPT3.5 is only
used to generate formal statements.
Problem Solve Rate GSM8K Number Theory Algebra Prealgebra MultiArith Average
_Baselines_
Single Sample with Minerva 62B 48.6% 12.2% 33.3% 34.1% 85.7% 42.8%
Majority Voting with Minerva 62B 67.2% 23.7% 60.8% 59.2% 96.6% 61.5%
_Don’t Trust: Verify (DTV)_
DTV Formalization with Minerva 62B 71.4% 31.9% 61.8% 61.0% 98.8% 65.0%
DTV Formalization with GPT3.5* **79.4%** **36.1%** **63.2%** **63.4%** **99.0%** **68.2%**
4.5 ABLATION
**Size of informal solution models. Table 2 shows how baselines and DTV performs when varying**
the size of model that generates informal solutions. Specifically, we experiment with all three sizes
of the Minerva model: 8B, 62B and 540B on Number theory, GSM8K and MultiArith problems.
Not surprisingly, as the informal solution model scales up, both single sample and majority voting
baselines improve their performance, suggesting larger models have generally stronger quantitative
reasoning capability than smaller models. For DTV, we keep the formalization model the same as
in Table 1 and do not vary its size. DTV consistently outperforms the baselines by a large margin
across all three informal reasoning model sizes on all three datasets, indicating the scalability of DTV.
The improvement is particularly significant on Minerva 8B, almost doubling its problem solve rate
for GSM8K and number theory. Interestingly, the results also demonstrate that it is beneficial to
autoformalize informal solutions from a larger model (Minerva 540B) even with a weaker model
such as Minerva 62B, which opens up the possibility of reducing autoformalization runtime and cost.
Table 2: Problem solve rate on MATH Number Theory, GSM8K, MultiArith datasets. Each column
represents the model that generates informal solutions, ranging from Minerva 8B to Minerva 540B.
Baselines and DTV are shown on each row. DTV consistently outperforms baselines at different
model sizes. *To reduce the time and cost of repeatedly calling OpenAI APIs, GPT3.5 is only used to
generate formal statements.
Problem Solve Rate on Number Theory Minerva 8B Minerva 62B Minerva 540B
_Baselines_
Single Sample 7.1% 12.2% 19.1%
Majority Voting 13.5% 23.7% 36.1%
_Don’t Trust: Verify (DTV)_
DTV Formalization with Minerva 62B 23.0% 31.9% 40.2%
DTV Formalization with GPT3.5* **26.1%** **36.1%** **44.4%**
Problem Solve Rate on GSM8K Minerva 8B Minerva 62B Minerva 540B
_Baselines_
Single Sample 15.2% 48.6% 54.1%
Majority Voting 27.7% 67.2% 75.6%
_Don’t Trust: Verify (DTV)_
DTV Formalization with Minerva 62B **48.4%** **71.4%** **78.8%**
Problem Solve Rate on MultiArith Minerva 8B Minerva 62B Minerva 540B
_Baselines_
Single Sample 55.7% 85.7% 95.9%
Majority Voting 85.3% 96.6% 99.5%
_Don’t Trust: Verify (DTV)_
DTV Formalization with Minerva 62B **95.0%** **98.8%** **99.8%**
-----
**Effect of solution formalization and filters. For quantitative reasoning problems, it is possible**
to directly prove a translated formal statement with the automated theorem prover in absence of a
step-by-step solution sketch. This is because once the statement has been formalized correctly, the
proposition to be proved is not “far” from the assumption. Proving the proposition could just involve
simplification and evaluation that the automated theorem prover can be capable of. We observe that
by not asking DTV to formalize informal solutions, we can still achieve a strong problem solve
rate (see Table 3). However, in this case only the informal solution answer is checked by DTV and
correct formal statements that require elaborate formal solution steps will not be proved successfully.
As shown in Table 3, both filters that are used to detect unfaithful formalization are beneficial for
improving the problem solve rate.
Table 3: Problem solve rate on MATH Number theory, Algebra and Prealgebra categories with
three types of ablation. The results suggest that solution formalization and two statement filters are
beneficial for improving performance.
Problem Solve Rate Number Theory Algebra Prealgebra
DTV Formalization with Minerva 62B 31.9% 61.8% 61.0%
**– Formal Solution Sketch** 29.2% 60.7% 60.3%
**– Vacuous Statement Filter** 30.7% 61.4% 60.4%
**– Self-Critique Filter** 30.0% 60.9% 58.4%
4.6 QUALITATIVE ANALYSIS
**DTV translates problems that majority voting fails. In Figure 2, we show four case study examples**
that majority voting fails to solve due to the correct answer not being the most common answer.
DTV, however, successfully formalizes the informal statement and uses automated theorem prover to
prove these propositions, making the correct answer as the majority answer. Each example statement
formalized require clear and precise understanding of the informal statement and the language model
is capable of formalizing diverse types of problems. For example, in the first example, the model
needs to understand inclusion-exclusion relationship between different types of students and precisely
translates them into the formal environment. In Appendix A.1, we provide more case study examples
of statement formalization.
**DTV solves additional problems leveraging correct informal solution steps. In Figure 3, we**
showcase a number theory problem whose statement and informal solution are both formalized
correctly by the model. The formal statement itself is beyond the capability of automated theorem
prover and requires formal solution steps to solve it. It can be seen that by translating individual
informal solution steps correctly, the problem is successfully solved. Additionally, we observe that
for this problem, there are low-quality informal solutions that simply output a final answer without
justification and their corresponding formal solutions cannot prove the proposition. This suggests that
autoformalization could potentially be used to identify high-quality informal solutions. We provide
more examples in Appendix A.2.
4.7 LIMITATIONS AND FUTURE WORK
One limitation of our approach is that it requires the theorem proving environment to support the
problem domain in question. Currently, most environments are still limited in scope, a shortcoming
that DTV naturally inherits. For example, Isabelle has limited support for formalizing geometric and
probabilistic reasoning. Statements involving such arguments usually need to be built upon measure
theory and advanced calculus, which is far beyond the math required to solve these high school
problems. This is also the reason why these subsets in the MATH dataset have been excluded by
prior work (Zheng et al., 2021) and us. As more support in such areas becomes available, we expect
DTV to exhibit similar gains but a thorough evaluation is currently left as future work. Until then, for
practical purposes, one could imagine training a simple classifier that predicts if a particular problem
belongs to the domain that is well suited for DTV.
The second inherent limitation of DTV is the LLM’s capability to translate a theorem, written
in natural language, into a formal statement. Mistakes can be subtle. For example, 1 − 2 + 3
could be erroneously formalized to (1::nat) - 2 + 3, which evaluates to 3 instead of 2 in
Isabelle because a variable of the type nat cannot be negative (i.e. (1::nat) - 2 = 0). Future
-----
**Informal Statement from Number Theory: The least common multiple of two numbers is 3720, and**
their greatest common divisor is 8. Given that one of the numbers is 120, what is the other number?
Show that it is 248.
**Generated Informal Solution:**
The product of the two numbers is the product of their LCM and their GCD:
3720 · 8 = 29760.
Since one of the numbers is 120, we can divide this product by 120 to obtain the other number:
29760
= 248 .
120
Final Answer: The final answer is 248. I hope it is correct.
**Generated Formal Statement and Formal Solution:**
**theorem**
**fixes x y :: nat**
**assumes "lcm x y = 3720"**
**and "gcd x y = 8"**
**and "x = 120"**
**shows "y = 248"**
**proof -**
**(* The product of the two numbers is the product of their LCM and their GCD:**
**3720\cdot8=29760. *)**
**have "lcm x y * gcd x y = x * y"**
by auto [ATP]
then have "3720 * 8 = 120 * y" using assms
by fastforce [ATP]
**(* Since one of the numbers is 120, we can divide this product by 120**
**to obtain the other number: \frac{29760}{120}=\boxed{248}. *)**
then have "y = 29760/120"
by auto [ATP]
then show ?thesis
by auto [ATP]
**qed**
Figure 3: An example number theory problem of which both the informal statement and solution are
translated correctly by DTV. The generated formal statement cannot be directly solved by automated
theorem prover. By translating informal solution step by step, the formal statement is proved and the
answer is verified. The steps that end with [ATP] are generated by the automated theorem prover.
directions could explore employing more effective filters and leveraging reinforcement learning
from human feedback (RLHF) (Bai et al., 2022) to further finetune language models on aligned
mathematical formalizations directly.
5 CONCLUSION
In this paper, we show that by leveraging the autoformalization capability of large language models
through few-shot prompting, we can identify the correct answer among many informal solution
samples generated by the same large language models. Our approach, Don’t Trust: Verify, utilizes the
internal consistency of formal theorem proving environments to check for correct answers. DTV is
lightweight with no training or finetuning required. We demonstrate the feasibility and effectiveness
of DTV by reaching state-of-the-art performance on GSM8K, subsets of MATH and MultiArith
datasets. DTV consistently outperforms vanilla majority voting, the best previous approach, and leads
to improvement across different model sizes from 8B, 64B and 540B. DTV is also complementary to
different prompting methods such as Zheng et al. (2023); Fu et al. (2023) that only process reasoning
in informal domain. We seek to combine these approaches with DTV for future work.
-----
6 ACKNOWLEDGEMENT
We would like to thank Albert Jiang for his help and support in Isabelle[1]. JPZ is supported by grant
from the Natural Sciences and Engineering Research Council of Canada (NSERC) (567916). WL
was supported by the ERC Advanced Grant ALEXANDRIA (Project GA 742178).
REFERENCES
Ayush Agrawal, Siddhartha Gadgil, Navin Goyal, Ashvni Narayanan, and Anand Tadipatri. Towards a
mathematics formalisation assistant using large language models. arXiv preprint arXiv:2211.07524,
2022.
Zhangir Azerbayev, Bartosz Piotrowski, Hailey Schoelkopf, Edward W Ayers, Dragomir Radev, and
Jeremy Avigad. Proofnet: Autoformalizing and formally proving undergraduate-level mathematics.
_arXiv preprint arXiv:2302.12433, 2023._
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain,
Stanislav Fort, Deep Ganguli, Tom Henighan, et al. Training a helpful and harmless assistant with
reinforcement learning from human feedback. arXiv preprint arXiv:2204.05862, 2022.
Bruno Barras, Samuel Boutin, Cristina Cornes, Judicael Courant, Jean-Christophe Filliatre, Eduardo¨
Gimenez, Hugo Herbelin, Gerard Huet, Cesar Munoz, Chetan Murthy, et al. The Coq proof
_assistant reference manual: Version 6.1. PhD thesis, Inria, 1997._
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
Davide Castelvecchi et al. Mathematicians welcome computer-assisted proof in ‘grand unification’theory. Nature, 595(7865):18–19, 2021.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. Program of thoughts prompting:
Disentangling computation from reasoning for numerical reasoning tasks, 2022.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam
Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm:
Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve
math word problems. arXiv preprint arXiv:2110.14168, 2021.
Antonia Creswell, Murray Shanahan, and Irina Higgins. Selection-inference: Exploiting large
language models for interpretable logical reasoning, 2022.
Leonardo de Moura, Soonho Kong, Jeremy Avigad, Floris Van Doorn, and Jakob von Raumer. The
lean theorem prover (system description). In Automated Deduction-CADE-25: 25th International
_Conference on Automated Deduction, Berlin, Germany, August 1-7, 2015, Proceedings 25, pp._
378–388. Springer, 2015.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, and Tushar Khot. Complexity-based prompting
for multi-step reasoning, 2023.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and
Graham Neubig. Pal: Program-aided language models. arXiv preprint arXiv:2211.10435, 2022.
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song,
and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. arXiv
_preprint arXiv:2103.03874, 2021._
1https://github.com/albertqjiang/Portal-to-ISAbelle
-----
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text
degeneration. arXiv preprint arXiv:1904.09751, 2019.
Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large
language models. arXiv preprint arXiv:2303.05398, 2023.
Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane
Dwivedi-Yu, Armand Joulin, Sebastian Riedel, and Edouard Grave. Atlas: Few-shot learning with
retrieval augmented language models. arXiv preprint arXiv, 2208, 2022.
Albert Qiaochu Jiang, Wenda Li, Jesse Michael Han, and Yuhuai Wu. Lisa: Language models of
isabelle proofs. In 6th Conference on Artificial Intelligence and Theorem Proving, pp. 378–392,
2021.
Albert Qiaochu Jiang, Sean Welleck, Jin Peng Zhou, Wenda Li, Jiacheng Liu, Mateja Jamnik,
Timoth’ee Lacroix, Yuhuai Wu, and Guillaume Lample. Draft, Sketch, and Prove: Guiding formal
theorem provers with informal proofs. In International Conference on Learning Representations,
[2023. URL https://doi.org/10.48550/arXiv.2210.12283.](https://doi.org/10.48550/arXiv.2210.12283)
Jaehun Jung, Lianhui Qin, Sean Welleck, Faeze Brahman, Chandra Bhagavatula, Ronan Le Bras, and
Yejin Choi. Maieutic prompting: Logically consistent reasoning with recursive explanations. In
_Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp._
1266–1279, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational
[Linguistics. URL https://aclanthology.org/2022.emnlp-main.82.](https://aclanthology.org/2022.emnlp-main.82)
Aishwarya Kamath and Rajarshi Das. A survey on semantic parsing. arXiv preprint arXiv:1812.00978,
2018.
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish
Sabharwal. Decomposed prompting: A modular approach for solving complex tasks, 2023.
Gerwin Klein, Kevin Elphinstone, Gernot Heiser, June Andronick, David Cock, Philip Derrin,
Dhammika Elkaduwe, Kai Engelhardt, Rafal Kolanski, Michael Norrish, et al. sel4: Formal
verification of an os kernel. In Proceedings of the ACM SIGOPS 22nd symposium on Operating
_systems principles, pp. 207–220, 2009._
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large
language models are zero-shot reasoners. Advances in neural information processing systems, 35:
22199–22213, 2022.
Angeliki Lazaridou, Elena Gribovskaya, Wojciech Stokowiec, and Nikolai Grigorev. Internetaugmented language models through few-shot prompting for open-domain question answering.
_arXiv preprint arXiv:2203.05115, 2022._
Aitor Lewkowycz, Anders Johan Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski,
Vinay Venkatesh Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo,
Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave,
and Kyunghyun Cho (eds.), Advances in Neural Information Processing Systems, 2022. URL
[https://openreview.net/forum?id=IFXTZERXdM7.](https://openreview.net/forum?id=IFXTZERXdM7)
Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher
Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted
question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021.
Tobias Nipkow, Markus Wenzel, and Lawrence Charles Paulson. Isabelle/hol: A proof assistant for
higher-order logic. 2002.
[OpenAI. Introducing chatgpt. URL https://openai.com/blog/chatgpt.](https://openai.com/blog/chatgpt)
OpenAI. Gpt-4 technical report, 2023.
-----
Lawrence C Paulsson and Jasmin C Blanchette. Three years of experience with sledgehammer,
a practical link between automatic and interactive theorem provers. In Proceedings of the 8th
_International Workshop on the Implementation of Logics (IWIL-2010), Yogyakarta, Indonesia._
_EPiC, volume 2, 2012._
Subhro Roy and Dan Roth. Solving general arithmetic word problems. _arXiv preprint_
_arXiv:1608.01413, 2016._
William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan
Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802,
2022.
Timo Schick, Jane Dwivedi-Yu, Roberto Dess`ı, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer,
Nicola Cancedda, and Thomas Scialom. Toolformer: Language models can teach themselves to
use tools. arXiv preprint arXiv:2302.04761, 2023.
Weijia Shi, Sewon Min, Michihiro Yasunaga, Minjoon Seo, Rich James, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. Replug: Retrieval-augmented black-box language models. arXiv preprint
_arXiv:2301.12652, 2023._
Christian Szegedy. A promising path towards autoformalization and general artificial intelligence. In
_Intelligent Computer Mathematics: 13th International Conference, CICM 2020, Bertinoro, Italy,_
_July 26–31, 2020, Proceedings 13, pp. 3–20. Springer, 2020._
Qingxiang Wang, Chad Brown, Cezary Kaliszyk, and Josef Urban. Exploration of neural machine
translation in autoformalization of mathematics in mizar. In Proceedings of the 9th ACM SIGPLAN
_International Conference on Certified Programs and Proofs, pp. 85–98, 2020._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency
improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V
Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models.
In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances in
_[Neural Information Processing Systems, 2022. URL https://openreview.net/forum?](https://openreview.net/forum?id=_VjQlMeSB_J)_
[id=_VjQlMeSB_J.](https://openreview.net/forum?id=_VjQlMeSB_J)
Sean Welleck, Jiacheng Liu, Ximing Lu, Hannaneh Hajishirzi, and Yejin Choi. Naturalprover:
Grounded mathematical proof generation with language models, 2022.
Freek Wiedijk. Formal proof sketches. Lecture notes in computer science, 3085:378–393, 2004.
Freek Wiedijk. Formal proof–getting started. 2008.
Yuhuai Wu, Albert Qiaochu Jiang, Wenda Li, Markus Rabe, Charles Staats, Mateja Jamnik, and
Christian Szegedy. Autoformalization with large language models. Advances in Neural Information
_Processing Systems, 35:32353–32368, 2022._
Zhuosheng Zhang, Aston Zhang, Mu Li, and Alex Smola. Automatic chain of thought prompting in
large language models. arXiv preprint arXiv:2210.03493, 2022.
Chuanyang Zheng, Zhengying Liu, Enze Xie, Zhenguo Li, and Yu Li. Progressive-hint prompting
improves reasoning in large language models, 2023.
Kunhao Zheng, Jesse Michael Han, and Stanislas Polu. Minif2f: a cross-system benchmark for
formal olympiad-level mathematics. arXiv preprint arXiv:2109.00110, 2021.
-----
### APPENDIX
A ADDITIONAL CASE STUDY EXAMPLES
A.1 FAITHFUL STATEMENT FORMALIZATION
**Informal Statement: Gina has five pairs of white socks, three pairs of black socks, and two pairs of**
red socks. What percent of her socks are red? Show that it is 20%.
**Generated Formal Statement:**
**theorem**
**assumes "card whites = 10"**
**and "card blacks = 6"**
**and "card reds = 4"**
**shows "100 * (card reds) / (card whites + card blacks + card reds) = 20"**
**Informal Statement: What is the sum of the units digits of all the multiples of 3 between 0 and 50?**
Show that it is 78.
**Generated Formal Statement:**
**theorem**
"sum_list [x mod 10 . x \<leftarrow> [0..50], 3 dvd x] = (78::nat)"
**Informal Statement: Blake and Jenny each took four 100-point tests. Blake averaged 78 on the four**
tests. Jenny scored 10 points higher than Blake on the first test, 10 points lower than him on the second
test, and 20 points higher on both the third and fourth tests. What is the difference between Jenny’s
average and Blake’s average on these four tests? Show that it is 10.
**Generated Formal Statement:**
**theorem**
**fixes b1 b2 b3 b4 j1 j2 j3 j4 :: real**
**assumes "b1 + b2 + b3 + b4 = 4 * 78"**
**and "j1 = b1 + 10"**
**and "j2 = b2 - 10"**
**and "j3 = b3 + 20"**
**and "j4 = b4 + 20"**
**shows "(j1 + j2 + j3 + j4)/4 - (b1 + b2 + b3 + b4)/4 = 10"**
-----
**Informal Statement: What is the perimeter, in units, of a rhombus if its area is 120 square units and**
one diagonal is 10 units? Show that it is 52.
**Generated Formal Statement:**
**theorem**
**fixes d1 d2 :: real**
**assumes "d1 > 0"**
**and "d2 > 0"**
**and "d1 = 10"**
**and "d1 * d2 / 2 = 120"**
**shows "2 * sqrt ((d1/2)ˆ2 + (d2/2)ˆ2) * 2 = 52"**
**Informal Statement: Sam is carrying a 2 gallon bucket of milk to his house from the barn, which is 3**
miles away from his house. However, the bucket has a leak in it. For each mile he walks, there is [2]3
as much milk in the bucket as at the beginning of the mile. How many gallons of milk will be in the
bucket when Sam gets home? Show that it is [16]27 [.]
**Generated Formal Statement:**
**theorem**
**fixes x :: real**
**assumes "x = (2/3)ˆ3 * 2"**
**shows "x = 16/27"**
**Informal Statement: Expand (2x[5]** + 3x[2])(x[4] _−_ 4x[2] + 3x − 8). Show that it is 2x[9] _−_ 8x[7] + 9x[6] _−_
16x[5] _−_ 12x[4] + 9x[3] _−_ 24x[2].
**Generated Formal Statement:**
**theorem**
**fixes x :: real**
**shows "(2 * xˆ5 + 3 * xˆ2) * (xˆ4 - 4 * xˆ2 + 3 * x - 8)**
= 2 * xˆ9 - 8 * xˆ7 + 9 * xˆ6 - 16 * xˆ5
- 12 * xˆ4 + 9 * xˆ3 - 24 * xˆ2"
**Informal Statement: The function f** (n) = n[2] + n + 17 for 0 ≤ _n ≤_ 15 generates prime numbers.
What is the value of f (10) − _f_ (9)? Show that it is 20.
**Generated Formal Statement:**
**theorem**
**fixes n :: nat**
**assumes "\<forall> n \<in> {0..15}. prime (nˆ2 + n + 17)"**
**shows "((10::nat)ˆ2 + 10 + 17) - ((9::nat)ˆ2 + 9 + 17) = 20"**
**Informal Statement: Reversing the digits in John’s age results in his father’s age. The positive**
difference between their ages is 27 years. If the sum of the digits in each age is 5, what is the number of
years in John’s father’s age? Show that it is 41.
**Generated Formal Statement:**
**theorem**
**fixes j f j1 f1 :: nat**
**assumes "j = 10 * j1 + f1"**
**and "f = 10 * f1 + j1"**
**and "f - j = 27"**
**and "j1 + f1 = 5"**
**shows "f = 41"**
-----
**Informal Statement: It took Lara five days to read a novel. Each day after the first day, Lara read half**
as many pages as the day before. If the novel was 248 pages long, how many pages did she read on the
first day? Show that it is 128.
**Generated Formal Statement:**
**theorem**
**fixes p1 p2 p3 p4 p5 :: nat**
**assumes "p1 + p2 + p3 + p4 + p5 = 248"**
**and "p2 = p1 / 2"**
**and "p3 = p2 / 2"**
**and "p4 = p3 / 2"**
**and "p5 = p4 / 2"**
**shows "p1 = 128"**
**Informal Statement: Alex, Bob, Camille, and Danielle’s mothers are comparing their children’s ages.**
They observe that the sum of Alex, Bob, and Danielle’s ages is fourteen times Camille’s age. They also
note that the sum of Alex and Bob’s ages is six times Camille’s age, and Bob’s age is two years less
than the difference in ages of Danielle and Alex. How old is Camille? Show that it is 1.
**Generated Formal Statement:**
**theorem**
**fixes a b c d :: nat**
**assumes "a > 0"**
**and "b > 0"**
**and "c > 0"**
**and "d > 0"**
**and "a + b + d = 14 * c"**
**and "a + b = 6 * c"**
**and "b = (d - a) - 2"**
**shows "c = 1"**
Figure 4: Additional examples of faithful formal statements translated from informal statements
with correct answers by DTV. Majority voting failed to solve these problems but DTV solves them
successfully. The model is capable of translating complex informal statements precisely. Note that the
”Show that it is [answer]” is appended to the original informal problem statement by first extracting
the answer from informal solution.
-----
A.2 FAITHFUL SOLUTION FORMALIZATION
**Informal Statement: Kailin’s favorite restaurant has a special offer on a three-course meal. A meal**
consists of an appetizer, an entree and a dessert. The menu features five appetizers, eight entrees and
four desserts. How many different three-course meals can be formed with these choices? Show that it is
160.
**Generated Informal Solution:**
We have to choose one of the five appetizers, one of the eight entrees, and one of the four desserts. We
have 5 choices for the appetizers, 8 choices for the entrees, and 4 choices for the dessert. Therefore, the
number of possible meals is 5 · 8 · 4 = 160. Final Answer: The final answer is 160. I hope it is correct.
**Generated Formal Statement and Formal Solution:**
**theorem**
**assumes "card appetizers = 5"**
**and "card entrees = 8"**
**and "card desserts = 4"**
**shows "card (appetizers \<times> entrees \<times> desserts) = 160"**
**proof -**
**(* We have to choose one of the five appetizers, one of the eight entrees,**
**and one of the four desserts. *)**
**have "card (appetizers \<times> entrees \<times> desserts)**
= card appetizers * card entrees * card desserts"
by (simp add: card_cartesian_product)
**(* Therefore, the number of possible meals is 5 · 8 · 4 = 160. *)**
then have "card (appetizers \<times> entrees \<times> desserts)
= 5 * 8 * 4" using assms
by auto [ATP]
**also have "\<dots> = 160"**
by simp [ATP]
**(* Final Answer: The final answer is 160. I hope it is correct. *)**
**finally show ?thesis**
by auto [ATP]
**qed**
-----
**Informal Statement: Simplify** _√192. Show that it is 8√3._
**Generated Informal Solution:**
We can factor out a perfect square from 192: 192 = 64 · 3 = 8[2] _· 3. Thus, we have_ _√192 =_ _√8[2]_ _· 3 =_
8√3. Final Answer: The final answer is 8√3. I hope it is correct.
**Generated Formal Statement and Formal Solution:**
**theorem**
**fixes x :: real**
**assumes "x = 192"**
**shows "sqrt x = 8 * sqrt (3::real)"**
**proof -**
**(* We can factor out a perfect square from 192: 192 = 64 · 3 = 8[2]** _· 3._
**Thus, we have** _√192 =_ _√8[2]_ _· 3 = 8√3. *)_
**have "x = 64 * 3" unfolding assms**
by simp [ATP]
then have "sqrt x = sqrt (64 * 3)"
by simp [ATP]
**also have "\<dots> = sqrt 64 * sqrt 3"**
**using real_sqrt_mult by blast [ATP]**
**also have "\<dots> = 8 * sqrt 3"**
by simp [ATP]
**finally show ?thesis**
by simp [ATP]
**qed**
**Informal Statement: What is 3[−][4]** _· 3[−][8]_ _· 3[14]? Show that it is 9._
**Generated Informal Solution:**
By the product rule for exponents, we have that
3[−][4] _· 3[−][8]_ _· 3[14]_ = 3[(][−][4)+(][−][8)+14] = 3[2] = 9 .
Final Answer: The final answer is 9. I hope it is correct.
**Generated Formal Statement and Formal Solution:**
**theorem**
**fixes x y z :: real**
**assumes "x = 3 powr (-4)"**
**and "y = 3 powr (-8)"**
**and "z = 3 powr 14"**
**shows "x * y * z = 9"**
**proof -**
**(* By the product rule for exponents, we have that *)**
**have "x * y * z = 3 powr (-4) * 3 powr (-8) * 3 powr 14"**
by (smt (z3) assms(1) assms(2) assms(3)) [ATP]
**(* 3ˆ{-4} \cdot 3ˆ{-8} \cdot 3ˆ{14} = 3ˆ{(-4)+(-8)+14} = 3ˆ{2} = \boxed{9}. *)**
**also have "\<dots> = 3 powr (-4 + -8 + 14)"**
by eval [ATP]
**also have "\<dots> = 3 powr 2"**
by auto [ATP]
**also have "\<dots> = 9"**
by auto [ATP]
**(* Final Answer: The final answer is 9. I hope it is correct. *)**
**finally show ?thesis**
by auto [ATP]
**qed**
-----
**Informal Statement: Simplify: 3!(2[3]** + _√9) ÷ 2. Show that it is 33._
**Generated Informal Solution:**
First, we simplify the expressions inside the parentheses and the square root.
2[3] + _√9 = 8 + 3 = 11_
Next, we simplify the factorial.
3! = 3 · 2 · 1 = 6
Now, we can combine the three expressions.
3!(2[3] + _√9) ÷ 2 = 6 · 11 ÷ 2 = 33_
Final Answer: The final answer is 33. I hope it is correct.
**Generated Formal Statement and Formal Solution:**
**theorem**
"fact 3 * (2ˆ3 + sqrt 9) / 2 = 33"
**proof -**
**(* First, we simplify the expressions inside the parentheses**
**and the square root. *)**
**have "2ˆ3 + sqrt 9 = 8 + 3"**
by auto [ATP]
**(* Next, we simplify the factorial. *)**
**have "fact 3 = 3 * 2"**
by (smt (z3) fact_2 fact_numeral pred_numeral_simps(3)) [ATP]
**(* Final Answer: The final answer is 33. I hope it is correct. *)**
then show ?thesis
by auto [ATP]
**qed**
Figure 5: Additional example problems of which both the informal statement and solution are translated correctly by DTV. The generated formal statement cannot be directly solved by an automated
theorem prover. By translating the informal solution step by step, the formal statement is proved
and the answer is verified. The steps that end with [ATP] are generated by the automated theorem
prover.
-----
B FEW-SHOT PROMPT DATA
We provide few-shot prompts used in the experiments in the supplementary material zip file. Below
is a few-shot prompt example for statement formalization.
**Informal Statement: Evaluate log2 64. Show that it is 6.**
**Formal Statement:**
**theorem**
**shows "log 2 64 = 6"**
**Informal Statement: What is the distance between the points with coordinates (−5, 5) and**
(5, −5)? Express your answer in simplest radical form. Show that it is 10√2.
**Formal Statement:**
**theorem**
**fixes x1 x2 y1 y2 :: real**
**assumes "(x1, y1) = (-5, 5)" and "(x2, y2) = (5, -5)"**
**shows "sqrt ((y2 - y1)ˆ2 + (x2 - x1)ˆ2) = 10 * sqrt(2)"**
**Informal Statement: If three flicks are equivalent to eight flecks, and six flocks are equivalent**
to four flecks, how many flocks are equivalent to 12 flicks? Show that it is 48.
**Formal Statement:**
**theorem**
**fixes flick fleck flock :: real**
**assumes "flick > 0" and "flock > 0" and "fleck > 0"**
**assumes "3 * flick = 8 * fleck"**
**and "6 * flock = 4 * fleck"**
**shows "48 * flock = 12 * flick"**
**...**
**_<Informal statement to be formalized here>_**
C ADDITIONAL DISCUSSION
**DTV with GPT3.5. In Table 4, we additionally evaluate the performance of DTV when the informal**
solutions and their formalization are both generated from GPT3.5. Due to the cost of OpenAI API,
we evaluate only on the MATH Number Theory category. It can be seen that by leveraging stronger
models like GPT3.5, the performance of DTV can be further improved.
Problem Solve Rate Minerva 62B GPT-3.5
Single Sample 12.2% 25.0%
Majority Voting 23.7% 41.0%
DTV Formalization with Minerva 62B 31.9% -
DTV Formalization with GPT-3.5 **36.1%** **49.4%**
Table 4: Comparison of problem solve rates
**DTV performance with number of samples. In the experiment section, we use 64 samples per**
problem for majority voting and DTV. In Figure 6, we plot the performance of majority voting and
DTV against number of samples per problem on Number Theory. It can be seen that both majority
voting and DTV improve as number of samples increase and DTV significantly outperform majority
voting when the number of samples is around 20.
**Informal answer extraction. To facilitate answer extraction, we include the following answer format**
in the few-shot prompt examples: The final answer is [placeholder]. and use regular expressions to
extract the answer. In certain scenarios, extraction might be more challenging and not only restricted
to the final answer. In this case, we could leverage various semantic parsing methods (Kamath & Das,
-----
Problem Solve Rate of on Number Theory Problems
0.30
0.25
0.20
0.15
Problem Solve Rate
0.10
0.05
Majority Voting with Minerva 62B
DTV Formalization with Minerva 62B
0.00
0 10 20 30 40 50 60
Number of Samples
Figure 6: Comparison of majority voting and DTV.
2018) and possibly use a language model to extract the informal reasoning as seen in Kojima et al.
(2022) as well.
**Potential improvement of filters. In the main text, we discuss and experiment with one symbolic**
filter that filters out vacuous formal statements. The filter can be leveraged to detect unfaithful
statement translations. It is also possible to expand this into a set of filters that attempt to detect trivial
formal statements, which are very likely to be unfaithful translations. One type of trivial statements
could have the same goal appearing in the assumption already, which makes it provable immediately
following the assumption. Another type could have the left-hand side of the goal being the same as
that of right-hand side. Implementing these filters could potentially further improve the performance
at a diminishing rate.
**DTV helps filter incorrect answer solutions. Besides problem solve rate, we study to what extent**
DTV can reject incorrect answer solutions present in the informal solution samples. As seen in Table
5, without autoformalization, the portion of correct answer solutions from Minerva 62B is between
24.3% and 33.9% for different categories. By performing autoformalization and only considering
informal solutions whose formal counterparts are proved, the percentage of correct answer solutions
increase by more than 20%, indicating the effectiveness of our approach in identifying correct answer
solutions. GPT3.5 is better than Minerva 62B at this, which also supports the fact that GPT3.5 leads
to higher problem solve rate in Table 1.
Table 5: Proportion of correct answer solutions generated by Minerva 62B before and after performing
autoformalization. DTV is effective in discarding incorrect answer solutions and lead to better
performance.
Correct Answer Rate Number Theory Algebra Prealgebra
No autoformalization 24.3% 31.0% 33.9%
DTV Formalization with Minerva 62B 50.5% 45.6% 45.2%
DTV Formalization with GPT3.5 62.3% 56.1% 54.9%
-----
| [
"Wenda, Li",
"Yuhuai, Wu",
"Jin Peng, Zhou",
"Christian, Szegedy",
"Charles, Staats",
"Kilian Q., Weinberger"
] | 2024-03-26T00:00:00 | ICLR 2024 | true | 0 | 0 | [
"Isabelle"
] | http://arxiv.org/abs/2403.18120 | https://arxiv.org/abs/2403.18120 | null |
Dreaming to Prove | N/A | null | null | [
"Szabó, Kristóf"
] | 2021-01-01T00:00:00 | null | false | 0 | 0 | null | http://aitp-conference.org/2021/abstract/paper_25.pdf | null | null |
DyVal: Graph-informed Dynamic Evaluation of Large Language Models | Large language models (LLMs) have achieved remarkable performance in various evaluation benchmarks. However, concerns about their performance are raised on potential data contamination in their considerable volume of training corpus. Moreover, the static nature and fixed complexity of current benchmarks may inadequately gauge the advancing capabilities of LLMs. In this paper, we introduce DyVal, a novel, general, and flexible evaluation protocol for dynamic evaluation of LLMs. Based on our proposed dynamic evaluation framework, we build graph-informed DyVal by leveraging the structural advantage of directed acyclic graphs to dynamically generate evaluation samples with controllable complexities. DyVal generates challenging evaluation sets on reasoning tasks including mathematics, logical reasoning, and algorithm problems. We evaluate various LLMs ranging from Flan-T5-large to ChatGPT and GPT4. Experiments demonstrate that LLMs perform worse in DyVal-generated evaluation samples with different complexities, emphasizing the significance of dynamic evaluation. We also analyze the failure cases and results of different prompting methods. Moreover, DyVal-generated samples are not only evaluation sets, but also helpful data for fine-tuning to improve the performance of LLMs on existing benchmarks. We hope that DyVal can shed light on the future evaluation research of LLMs. | null | ## DYVAL: DYNAMIC EVALUATION OF LARGE LANGUAGE MODELS FOR REASONING TASKS
**Kaijie Zhu[1][∗], Jiaao Chen[2][∗], Jindong Wang[1][†], Neil Zhenqiang Gong[3], Diyi Yang[4], Xing Xie[1]**
1Microsoft Research, 2Georgia Tech, 3Duke University, 4Stanford University
ABSTRACT
Large language models (LLMs) have achieved remarkable performance in various evaluation benchmarks. However, concerns are raised about potential data
contamination in their considerable volume of training corpus. Moreover, the
static nature and fixed complexity of current benchmarks may inadequately gauge
the advancing capabilities of LLMs. In this paper, we introduce DYVAL, a general and flexible protocol for dynamic evaluation of LLMs. Based on our framework, we build graph-informed DYVAL by leveraging the structural advantage
of directed acyclic graphs to dynamically generate evaluation samples with controllable complexities. DYVAL generates challenging evaluation sets on reasoning
tasks including mathematics, logical reasoning, and algorithm problems. We evaluate various LLMs ranging from Flan-T5-large to GPT-3.5-Turbo and GPT-4. Experiments show that LLMs perform worse in DYVAL-generated evaluation samples with different complexities, highlighting the significance of dynamic evaluation. We also analyze the failure cases and results of different prompting methods.
Moreover, DYVAL-generated samples are not only evaluation sets, but also helpful
data for fine-tuning to improve the performance of LLMs on existing benchmarks.
We hope that DYVAL can shed light on future evaluation research of LLMs. Code
[is available at: https://github.com/microsoft/promptbench.](https://github.com/microsoft/promptbench)
1 INTRODUCTION
Large Language Models (LLMs) have recently achieved unprecedented performance across diverse
tasks (OpenAI, 2023b; Bubeck et al., 2023). The great endeavor have led to positive speculation on
the possibility of LLMs being precursors of artificial general intelligence, necessitating the creation
of nuanced evaluations. By pinpointing gaps for improvements, evaluation becomes the bedrock
that enhances the understanding of current models and ensures AI’s continued progression.
Efforts to evaluate LLMs have become intensified significantly. Liang et al. (2023) introduced
HELM, which offers a holistic assessment of LLM in various scenarios. Similarly, Chatbot Arena
(Zheng et al., 2023) evaluates LLMs by contrasting their generated output. Other benchmarks that
have set the standard in the realm of LLM evaluations include AlpacaEval (Li et al., 2023c), C-Eval
(Huang et al., 2023), ARB (Sawada et al., 2023), API-Bank (Li et al., 2023a), Socket (Choi et al.,
2023), and Big-Bench (bench authors, 2023). Moreover, manual experiments have emerged as a
complementary approach to these benchmarks, with works such as Bubeck et al. (2023) and Bang
et al. (2023). Complementing these, human evaluators have also been instrumental in gauging the
prowess of LLMs, as discussed by Ziems et al. (2023) and Zeˇcevi´c et al. (2023).
Current evaluation benchmarks face two fundamental challenges. First, data contamination. Many
benchmarks source their data from the Internet, causing potential overlap with the vast corpus on
which LLMs are trained, leading to the debate of “Generalization vs. Memorization” (Bender et al.,
2021; Magar & Schwartz, 2022; Carlini et al., 2023; Biderman et al., 2023): Are the model’s re_sults stemming from genuine ability or just memorization of the training data? A recent example is_
provided by Zeˇcevi´c et al. (2023): LLMs can ambiguously deduce the conclusion that altitude influences temperature based on seen data. Similarly, Berglund et al. (2023) found that LLMs trained
_∗Equal contribution. Contact: [email protected], [email protected]._
_†Correspondence to: Jindong Wang <[email protected]>._
-----
on “A is B” fail to infer “B is A”, which doubts the abilities of LLMs might come from memorization. Second, static dataset and fixed complexity. As LLMs progress at a rapid pace, existing
datasets usually fail to match the models’ ever-evolving capabilities, because the complexity level of
existing benchmarks is usually static and fixed. As Dziri et al. (2023) demonstrated, while handling
simple problems pretty well, LLMs fail to solve complex problems. The inability to automatically
and dynamically increase complexity levels based on existing data prevents static benchmarks from
being adapted to accurately select, compare, and advance LLMs. Although there are a few existing
dynamic benchmarks like DynaBench (Kiela et al., 2021) and DynaBoard (Ma et al., 2021), they
rely on crowd-sourcing efforts for data collection, which might be expensive and tedious.
In this paper, we introduce DYVAL—a novel, general, and flexible evaluation protocol for the dy_namic evaluation of LLMs (Sec. 3.1). The core of DYVAL is to dynamically generate evaluation_
samples on the fly instead of collecting a fixed set of data. DYVAL consists of three components:
1) the generation algorithm G to generate test samples with diversities; 2) the constraint C to modulate sample complexity and validity; and 3) the description function F to translate the generated
samples into natural languages. Based on this framework, we propose a graph-informed DYVAL
(Sec. 3.2, Figure 1) to generate data using graphs. Specifically, inspired by techniques such as the
compiler principle (Alfred V et al., 2007) and parsing trees which decompose complexities (Klein
& Manning, 2003; Vinyals et al., 2015), we employ directed acyclic graphs (DAG) (Thulasiraman
& Swamy, 2011) to compose fundamental elements into more intricate problems, with each unit
symbolized as a graph node. The extendable and stochastic nature of graph generation effectively
regulates the complexity levels. Additionally, the hierarchical attributes of graphs suit them for
multi-step inferential tasks like logics. Problems generated by DYVAL not only require profound
understanding of problem solving rather than simple memorization but also echo the human approach to incremental problem-solving and solution derivation. Being general and flexible, DYVAL
co-exists and co-evolves with existing benchmarks for better LLMs evaluation and evolution.
We leverage DYVAL to synthesize 7 reasoning tasks[1], encompassing: (1) Mathematics: arithmetic
and linear equations; (2) Logical reasoning: boolean, deductive, and abductive logic; (3) Algorithm:
reachability and maximum sum path problems. We then re-examine the state-of-the-art LLMs ranging from Flan-T5-large (Chung et al., 2022), phi-1.5 (Li et al., 2023d), Xwin-13B (Team, 2023),
Llama2-13B-chat (Touvron et al., 2023), Vicuna-13B-v1.3 (Chiang et al., 2023), WizardMath13B (Luo et al., 2023), to GPT-3.5-Turbo (OpenAI, 2023a) and GPT-4 (OpenAI, 2023b) with DYVAL. We also test with recent prompting techniques including Few-shot (Brown et al., 2020), CoT
(Wei et al., 2022), Least to Most prompting (Zhou et al., 2023b), Automatic Prompt Engineering
(Zhou et al., 2023d), and Skills-in-Context prompting (Chen et al., 2023). Finally, we perform
human study involving 82 human evaluators for comparison and fine-tuning experiments using DYVAL-generated evaluation samples. Furthermore, experiments on existing benchmarks also show
that fine-tuning LLMs with data generated by DYVAL could directly improve models’ abilities without extra careful collection of training data (Zhou et al., 2023a). We further show the flexibility of
DYVAL by extending it to natural language tasks in Appendix H. Our key findings are:
- Results on DYVAL evaluation are not always consistent with those on existing benchmarks,
**indicating possible low training data quality and/or data contamination of existing LLMs**
(Sec. 4.2). For instance, phi-1.5, WizardMath-13B, and Xwin-13B perform poorly on DYVAL
while claiming huge improvements on existing benchmarks.
- As difficulty increases, LLMs tend to perform worse and their performance gap becomes
**larger, emphasizing the lack of compositionality of current LLMs and the importance of**
**evolving complexity evaluations (Sec. 4.2).**
- Our error analysis based on DYVAL evaluation exhibits various failure patterns which shed
light on how to further improve LLMs. (Sec. 4.3).
- No prompt engineering methods can perform best in all of our evaluation sets; and larger
**model sizes tend to achieve better performances (Sec. 4.4).**
- DYVAL can further be utilized to generate training data to improve the abilities of LLMs.
(Sec. 5). For instance, fine-tuning the Llama2 models with our DYVAL generated data demonstrates enhanced results on 6 existing benchmarks.
1We choose reasoning tasks mainly due to (1) the intrinsic connection between reasoning proficiency and
intelligence; (2) the notable progress LLMs have achieved in reasoning-centric tasks (Sawada et al., 2023).
Note that DYVAL could also be applied to existing benchmarks to create new and harder evaluation data.
-----
To sum up, this paper makes the following contributions:
- A dynamic evaluation protocol. DYVAL is a dynamic evaluation protocol designed to generate test samples dynamically, mitigating the issues of data contamination and static complexity.
- A graph-informed DYVAL algorithm for evaluation of the reasoning abilities of LLMs. We
use DAGs to compose 7 reasoning problems from mathematics, logical reasoning to algorithms.
- Extensive experiments and analysis. We conduct extensive experiments to provide insights
for evaluating and improving LLMs.
2 RELATED WORK
**Evaluating LLMs.** While neural networks are recognized as the universal function approximators
(Cybenko, 1989) with remarkable data fitting capabilities (Zhang et al., 2021; Arpit et al., 2017), debates (Bender et al., 2021; Zhang et al., 2021; T¨anzer et al., 2022; Magar & Schwartz, 2022; Carlini
et al., 2023; Wu et al., 2023; Tang et al., 2023; Zeˇcevi´c et al., 2023; Koco´n et al., 2023; Schaeffer,
2023; Biderman et al., 2023; Zhu & Li, 2023) persist regarding the true nature of LLMs’ generalization abilities. The growing prominence of LLMs necessitates rigorous benchmarks (Hendrycks
et al., 2021; Li et al., 2023b; Zhong et al., 2023; HuggingFace, 2023). Recent trends include: (1)
human-centric evaluations (Gao et al., 2022; Ribeiro & Lundberg, 2022), (2) crowd-sourced testing
(Kiela et al., 2021; Ma et al., 2021), and (3) specialized task challenges (Liang et al., 2023; Tian
et al., 2018; Ribeiro et al., 2020; bench authors, 2023). Complementing with these, our DYVAL
introduces a dynamic evaluation system, consistently relevant in the swiftly evolving landscape of
AI. Although Krause et al. (2018) introduced the term “dynamic evaluation”, our DYVAL differs
considerably in its approach and goals. Specifically, reasoning is widely recognized as the core of
both human and AI. Our focus on constructing reasoning tasks mirrors the intricate and multi-step
nature of human reasoning (Brody, 1999; Lohman & Lakin, 2011; Sawada et al., 2023), building
reasoning benchmarks is a critical step to help LLMs towards intelligence.
**Data Contamination.** Researchers start to realize the potential data contamination problem in
LLMs (Lovin, 2023; Chowdhuri et al., 2023; Bender et al., 2021; Koco´n et al., 2023). The GPT-4 and
LLama reports clearly stated the phenomenon of data contamination. Recently, Zhou et al. (2023c)
discussed the risks and impacts of data contamination of evaluation benchmarks in assessing LLMs.
Li (2023) examined the data contamination problem of LLama models. The Skywork LLM Wei et al.
(2023) again demonstrated the data contamination issue in several. Golchin & Surdeanu (2023a;b);
Oren et al. (2023); Yang et al. (2023b) designed novel methods to detect the data contamination of
LLMs. DYVAL is not a detection approach but a new protocol to mitigate the contamination issue.
**Complex-to-simple problem decomposition and evaluation set construction.** Employing
_graphs to deconstruct complex tasks has been an enduring and effective strategy across domains._
Compilers, as seen in computational theory (Alfred V et al., 2007), effectively break down highlevel constructs, while in NLP, parsing trees bring clarity to intricate syntactic and semantic structures (Klein & Manning, 2003; Vinyals et al., 2015). Roy & Roth (2015) displayed the potency of
this method in arithmetic, using trees for solving multi-step problems. Additionally, several contemporary techniques have implored LLMs to decompose complex problems (Wei et al., 2022; Zhou
et al., 2023b; Khot et al., 2022; Zhang et al., 2023). Several studies have leveraged graph-based approaches for constructing compositional tasks, particularly in the domains of first-order logic (Sinha
et al., 2019; Clark et al., 2020; Tian et al., 2021) and causal reasoning (Jin et al., 2023). DYVAL
presents notable distinctions in both objective and methodology. Additionally, GraphWorld (Palowitch et al., 2022) primarily benchmarks Graph Neural Networks (GNNs), whereas DYVAL focuses
on LLMs using the graph structure. They are different in nature.
3 DYVAL
In this section, we first elucidate our general dynamic evaluation protocol to address the challenges
of data contamination with dynamic data generation and controllable complexity in Sec. 3.1. We
then adapt this general protocol for reasoning tasks by leveraging the Directed Acyclic Graphs
(DAGs) in Sec. 3.2. More analysis of the flexibility of DYVAL is in Sec. 3.3.
-----
|Constraint|Col2|Generation Algorithm|Description Function|Col5|
|---|---|---|---|---|
|Task constraint Arithmetic Nonzero dividend, Overflow, … Linear Eq. Unique solution, …… Reachability Connected, …|Complexity constraint Tree-based DAG Depth, Width, Add extra links, … General DAG Num nodes, Num links, …|F Tree-based D E DAG A B C|Task Description Function Arithmetic What is the value of [root]? Linear Eq. What is the value of x, y? … Reachability Can [Node A] be reached from [Node B]?|DAG Description Function Tree-based DAG A is [Value]. D get its value by [Operation] A and B. … General DAG A points to None. B points to A. …|
|||F G General E D DAG A B C Node: Name, Operation, Value, Children Links: relationship between two nodes|||
|Step 1: Specify the constraint for DAG and task.|Col2|Col3|Col4|Col5|Step 2: Generate DAG with constraint.|Step 3: Describe DAG and task.|
|---|---|---|---|---|---|---|
||Arithmetic 𝓒𝑻|||Tree-based DAG|+ I F ∗𝟐 ∗ G − H Tree-based DAG 3 1 4 7 2 A B C D E|A’s value is 3, B’s value is 1, … F’ value is derived by squaringvalue of A, … I’svalue is derived by summing the value of F,G,H What is the value of I?|
||Nonzero dividend, Nonzero square root, Avoid overflow, …||||||
Figure 1: The pipeline of the graph-informed DYVAL. Up: the general evaluation framework; down:
an arithmetic example. More details can be found at Sec. 3.2 and Appendix B.
3.1 GENERAL DYNAMIC EVALUATION DESCRIPTION LANGUAGE
First, we introduce the general description language of the dynamic evaluation protocol. Given
a task T, a dynamic evaluation algorithm is formulated as AT = F(G(C)), where (1) G is the
**sample generation algorithm, incorporating randomness to guarantee uniqueness of each sample.**
Randomness may vary in different tasks, such as the numbers in math problems and the logic chains
in a logic reasoning task. (2) = _T,_ denotes constraints on, where _T is the task constraint_
_C_ _{C_ _CG}_ _G_ _C_
for task T such as the legality guarantee of the generated samples in the context of the task. is the
_CG_
complexity constraint for the generation process, such as the sampling strategy for the value at each
node and the number of perturbations added to the evaluation samples. (3) = _T,_ is the
_F_ _{F_ _FG}_
**description function to translate the raw evaluation samples generated by G into natural language**
descriptions. elucidates the characteristics and properties of the samples generated by . _T is_
_FG_ _G_ _F_
the description for task T such as task objective and expected outcomes.
In general, an evaluation sample can be represented as deval = _T (_ ( ( _,_ _T ))) using the above_
_F_ _FG_ _G_ _CG_ _C_
description language. first produces a sample that adheres to the complexity constraint and
_G_ _CG_
the task constraint _T . Then it undergoes transformation by description function_ into a natural
_C_ _FG_
language format and finally goes through the task description function FT . The description language
above naturally (1) avoids data contamination by dynamic generation through G, and (2) promises
dynamic datasets and controllable complexity through C. Specifically, by varying the constraints in
_C, we can generate evaluation samples of different difficulties, allowing “co-evolution” of both the_
LLMs and the evaluation process. The description language is flexible since it allows for different
generation algorithms and complexity control by changing G and C accordingly.
3.2 GRAPH-INFORMED DYNAMIC EVALUATION FOR REASONING TASKS
In this section, following the general evaluation description language, we implement DYVAL for
reasoning tasks by taking inspiration from the graph structure. Given the intrinsic multistep inferential nature of reasoning tasks, they inherently exhibit structural characteristics, making directed
acyclic graphs (DAGs) a natural choice for modeling these tasks. DAGs also facilitate dynamic sample generation by modulating the internal structure and fine-grained control over problem difficulty
by adjusting the structural complexity. More background of DAGs can be found in Appendix A.
3.2.1 GENERATION ALGORITHM G: DAG CONSTRUCTION
The generation algorithm is established on the graph construction process. We categorize DAGs
as Tree-based DAGs (T-DAGs) and General DAGs (G-DAGs), illustrated in Figure 1. T-DAGs are
inherently hierarchical, making them suitable for tasks that proceed from a set of initial premises to
-----
Table 1: Three types of reasoning tasks generated by DYVAL.
Generation Constraint
Field Task algorithm G _CT_ _C_ _CG_ # Classes Description F
: 1, 2, . . ., 10 Depth, Width, What is the
Arithmetic Tree-based _V :_ _{+,_ _,_ _,_ _,_ _}_ Extra links, Random desc - value of [Root]?
Mathematics _O_ _{_ _−_ _×_ _[√]·_ _·[2]}_
Linear : 1, 2, ..., 10 Depth, Width, What is the
equation Tree-based _O :V {+ {, −, ×,_ _[√]·,} ·[2]}_ Extra links, Random desc - value of x and y?
: True, False Depth, Width, 2 What is the
Logical Bool Tree-based _O :V { :AND {True, OR, False, NOT}_ _}_ Extra links, Random descDepth, Width, _{True,3 False}_ value of [Root]?What is the
Reasoning Deductive Tree-based _O :V {AND {_ _, OR, NOT}_ _}_ Extra links, Random desc _{True, False, N/A}_ value of [Root]?
: True, False Depth, Width, 3 Given [Root] is [Value],
Abductive Tree-based _O :V {AND {_ _, OR, NOT}_ _}_ Extra links, Random desc _{True, False, N/A}_ what is the value of [Leaf i]?
: # Nodes, # max links, 2 Can [Node i] be
Reachability General _V_ : − random desc True, False reached from [Node j]?
Algorithm _O_ _−_ _{_ _}_
Max sum : 1, 2, . . ., 10 # Nodes, # max links, What is the maximum
path General _V_ _{_ _O : −_ _}_ random desc - path [Node i] to [Node j]?
a final inference, such as arithmetic problems and logical reasoning tasks. Each node in T-DAGs
represents a foundational subproblem. These subproblems are chained by the links between nodes
and finally form a complex problem. Conversely, G-DAGs excel in mapping intricate relationships,
especially in tasks demanding understanding of non-linear interactions. They are ideal for algorithmic challenges involving complex dependencies. For instance, imagine modeling a system where
a change in one entity might impact multiple others in a cascading fashion, or tasks require finding
different potential pathways between entities. The generation process for these two types of DAGs
are presented in Appendix B.1.
**Randomness in DAGs generation process. T-DAG randomness arises from operations assigned to**
the nodes and the initial values of the leaf nodes. For instance, in arithmetic, the operation can be
“+”, with the leaf nodes receiving random numbers. On the other hand, for G-DAGs, each node
is endowed with a random value (if needed for a certain problem). For every node, the number of
children is determined randomly, and the maximum number of children depends on the input. We
then establish the links by selecting the target child nodes at random.
Theorems 3.1 and 3.2 formally guarantee the dynamic generation process by exploring the probability that two samples generated by T-DAG and G-DAG are identical. We focus exclusively on
the base case, setting aside additional complexities like the integration of random links or the embedding of random descriptions, which would further diminish the likelihood of two DAGs being
identical.
**Theorem 3.1. Given a tree-based DAG with depth d and width w, if the operation set for non-**
leaf nodes has k distinct operations and the value set for leaf nodes contains n distinct values, the
probability that two independently generated DAGs are identical is: P = _k_ _w[d]w[−]−[1]1−1_ _× n[w][d][−][1]_ [][−][1] _._
**Theorem 3.2. Given a general DAG with n nodes where each node has a minimum of l** 1 links,
1 _≥_
the probability that two randomly selected DAGs are identical is bounded by (n 1)! [.]
_−_
Proofs can be found in Appendix C. These theorems guarantee that the odds of producing identical
evaluation samples are considerably low. For instance, in the arithmetic task (where k = 6, n = 10)
with d = 4 and w = 2, the chances that two DAGs are identical hover around 1e[−][15].
3.2.2 CONSTRAINTS C FOR GRAPH GENERATION
**Task constraint CT . Task constraints vary for tasks. Take the node creation for instance: 1) What**
distribution should the node value adhere to? 2) What set of operations is permissible? 3) How
should a node’s value be computed from its children’s values? In arithmetic tasks, CT includes
ensuring that a dividend is nonzero, avoiding overflow, etc. Here, we concentrate on two general
task constraints: (1) Value distribution V: Specifies the permissible range or distribution from which
leaf node values can be assigned. For example, in logic reasoning tasks, the premises (leaf nodes) are
assigned either as True or False. (2) Operation set O: Lists the operations allowed within the DAG.
The operation set constraint is usually used for tree-based DAGs. For example, in an arithmetic task,
the set of allowed operations can be defined as the basic arithmetic operations {+, −, ×, /}.
**Complexity constraint** **. We investigate 4 techniques to inject complexity into DAGs (Figure 5):**
_CG_
(1) Change width and depth for T-DAGs: The natural way to control tree complexity. (2) Change
_number of nodes and links for G-DAGs: We control the total number of nodes in G-DAGs. The_
-----
number of links in each node is randomly selected from a predefined range, e.g., [1, 5]. (3) Add
_extra random links: For each node, we may introduce an additional link to another random node._
(4) Embed random descriptions: Add random descriptions to the primary DAG’s descriptions. More
details of complexity can be found in Appendix B.2 with Figure 7 as illustrations.
3.2.3 DESCRIPTION FUNCTION F
After constructing DAGs with certain constraints, we then need to convert them into comprehensible
natural language descriptions using the description function F.
**DAG description function** **.** We describe the DAG node by node and then form the de_FG_
scription of the nodes into sequences. The interpretation of each node in natural language depends on its position and the task. For leaf nodes that represent primary input or premises, they
can be described as: “The value of [Name] is [Value].” For instance, a node denoting
number 5 could be expressed as: “The value of node A is 5.” For T-DAGs, the intermediate nodes that typically denote operations performed on their child nodes, the description can
be formulated as: “The value of [Name] is derived by [Operation] the values of
[Children’s Names].” For G-DAG, the intermediate nodes are usually described as the connections between nodes: “The [Name] points to [Children’s Names]”. Note that natural
language descriptions can be replaced according to custom needs and can be further incorporated
with textual adversarial attacks (Li et al., 2019; Gao et al., 2018; Jin et al., 2020; Li et al., 2020).
Moreover, complexity is also influenced by the order that nodes are described. We design three
orders: topological, reversed topological, and random orders, each offering a unique challenge in
understanding the DAGs. The details of these orders are presented in Appendix B.4.
**Task description function FT . The construction of F highly depends on the context of tasks. Note-**
bly, this construction is also highly flexible. For instance, incorporating adversarial prompts (Zhu
et al., 2023) to the task description can make problems more difficult. Here we present the task description function for arithmetic and reachability tasks that are representative of T-DAG and G-DAG,
respectively. Appendix B.3 presents details and examples of the remaining 5 tasks.
_Arithmetic: Given a T-DAG, the DAG description function has already demonstrated the premise:_
the leaf nodes and the intermediate steps of inference: non-leaf nodes. Next, we select the root node
as the variable required to solve, we append the question “What is the value of [Root]?” to
the description where [Root] is filled with the name of the root variable (Figure 8).
_Reachability: The reachability task aims to model if two nodes are connected in a graph. For a G-_
DAG, the DAG description function has demonstrated the connections between nodes. The task
description for reachability task is: “Can the [Node i] be reached by [Node j]” where
Node i and Node j are randomly selected from the nodes in G-DAG (Figure 9).
Finally, while it is feasible to directly adopt GPT-4 to generate a contextualized description rather
than the plain one (see Appendix B.5), it is challenging to verify the rationale of the problems
generated by GPT-4. Thus, we leave it for future work.
3.3 DYVAL COEXISTS AND CO-EVOLVES WITH EXISTING BENCHMARKS
DYVAL is complementary to existing benchmarks. First, tasks with an intrinsic structure benefit
significantly from DYVAL since they can modulate complexity and randomness by adjusting the
generation process. Efforts such as CheckList (Ribeiro et al., 2020), data augmentation (Andreas,
2020; Zhang et al., 2022), and reasoning dataset synthesis (Sinha et al., 2019; Zhao et al., 2019; Clark
et al., 2020; Tian et al., 2021; Jin et al., 2023) can be easily integrated into DYVAL. On the contrary,
tasks without a well-defined structure may present challenges for DYVAL’s implementation. Second,
DYVAL can be enhanced by existing benchmarks to formulate more challenging scenarios. For
instance, the description function F is all about natural language texts, so it can be easily combined
with adversarial attacks (Li et al., 2019; Jin et al., 2020; Zhu et al., 2023) or out-of-distribution
prompts (Yang et al., 2023a) to assess the robustness of LLMs.
Note that while this papers focuses on evaluating reasoning tasks, DYVAL is flexible to evaluate
natural language tasks. We show an initial study using DYVAL to evaluate sentiment analysis in
Appendix H and more work can be done in the future. Finally, DYVAL guarantees an unbiased and
-----
GPT4 ChatGPT Llama2-13b-chat Vicuna-13b-v1.3
Boolean LogicLinear Equation 100 Arithmetic Linear Equation 100 Boolean Logic
Deductive Logic
40
Arithmetic
50 50
Abductive Logic Accuracy Accuracy 20 Accuracy
Max Sum Path
Reachability 0 0 0
D1 D2 D3 D4 D1 D2 D3 D4 D1 D2 D3 D4
Datasets Datasets Datasets
Deductive Logic Abductive Logic Reachability Max Sum Path
100
50 50 50 20
Accuracy Accuracy Accuracy Accuracy
0 0 0 0
D1 D2 D3 D4 D1 D2 D3 D4 D1 D2 D3 D4 D1 D2 D3 D4
Datasets Datasets Datasets Datasets
Figure 2: Results on 7 tasks with complexity from D1 to D4 (averaged on 3 description orders and
3 seeds). Xwin-13B, phi-1.5, and WizardMath-13B are not shown as their results are all 0.
balanced construction of evaluation samples by nature, since one can easily control the generation
process, as shown in Appendix F.
4 EXPERIMENT
4.1 SETUP
**Tasks and complexity level. We mainly discuss the constraint used in each task. Test set accuracy**
might differ as it is generated dynamically. To balance test time and discrepancy, we produce 500
samples for each dataset. To mitigate the impact of randomness on evaluation results, we assess
each dataset three times. We define 4 complexity levels (D1∼D4) for each task. For tasks that
use general DAGs, the number of nodes is set to {7, 10, 15, 20} with each node having {3, 4, 6, 8}
maximum links and 1 minimum link. For tasks that use tree-based DAGs, tree depths and widths
are (2, 2), (3, 2), (3, 3), (4, 2), respectively. More details of D1∼D4 are presented in Appendix D.
**Evaluation metric.** Our primary evaluation metric is accuracy. For tasks where answers are
numerical, we employ relative precision (Burden et al., 2015) to determine the correctness of a
prediction, i.e., an answer is deemed correct if its relative precision is within a specified threshold, σ (e.g., 0.01%), in relation to the ground truth value. Relative precision is calculated as
_|pred −_ gt|/(gt + ϵ) ≤ _σ where gt represents the ground truth value, pred is the model’s pre-_
diction, | · | is the absolute value function, σ is the desired relative precision threshold, and ϵ is a
small value introduced to prevent division by zero.
**LLMs. Our evaluated LLMs include Flan-T5-large (Chung et al., 2022), phi-1.5 (Li et al., 2023d),**
WizardMath-13B (Luo et al., 2023), Xwin-13B (Team, 2023), Llama2-13B-chat (Touvron et al.,
2023), Vicuna-13B-v1.3 (Chiang et al., 2023), GPT-3.5-Turbo (OpenAI, 2023a), and GPT-4 (OpenAI, 2023b). Temperature is set to 0 to avoid randomness. We set the generation length to be directly
proportional to the input length. Specifically, for GPT-3.5-Turbo and GPT-4, the generate length is
set to be twice the input length; for the remaining models, it is set to be five times the input length.
We designed prompts for each task, incorporating demonstrations of rules, particularly for reasoning and algorithm tasks. To ensure formatted output, we further ask LLMs to explicitly output their
predictions between “⟨⟨⟨” and “⟩⟩⟩”. All implementations are based on Huggingface.
4.2 RESULTS FOR MATH, LOGICAL REASONING, AND ALGORITHM TASKS
Before presenting the main results, note that the results of Flan-T5-large, phi-1.5, WizardMath**13B, and Xwin-13B in all tasks are 0, so we no longer report them. We carried out experiments**
using three random seeds. Figure 2 shows the results of all tasks averaged in three generation orders
and three random seeds (full results in Appendix D.4). GPT-4 performs best, followed closely
by GPT-3.5-Turbo. Llama2-13B-chat’s performance is subpar, with Vicuna-13B-v1.3 occasionally
outperforming Llama2-13b-chat. More findings are as follows.
**Inconsistent performance between existing static benchmarks and DYVAL: Despite the excel-**
lent results of phi-1.5, Xwin-13B and WizardMath-13B on existing benchmarks, their poor performance in our evaluations highlights the potential problems when evaluating LLMs solely on static
benchmarks and possible low training data quality or data contamination issue.
-----
**Difficulty with complex datasets: Performance mostly decreases sharply from D1 to D4, high-**
lighting LLMs’ struggles with increasing complexity. For example, GPT-3.5-Turbo’s performance
drops by 23% for arithmetic task as complexity increases. Notably, performance in abductive logic
(inferring premises from conclusions) is much lower than in deductive logic (deriving conclusions
from premises), as supported by Berglund et al. (2023), which shows LLMs excel more in “A is B”
than “B is A”. In addition, the performance difference between GPT-4 and GPT-3.5-Turbo, while
subtle in simpler tasks like D1, becomes prominent in complex tasks. These observations indicate
the value of intricate and evolving tasks to effectively differentiate and evaluate models. We also
present more interesting observations in Appendix D.4.
**Human study:** We recruited 82 human evaluators
with at least a bachelor’s degree[2], to gauge their skills
against LLMs on the most complex dataset (D4) for
mathematical and logical reasoning tasks. Each participant tackled 5 problems from each dataset. As depicted
in Figure 3, both GPT-4 and GPT-3.5-Turbo consistently showed high competence in most tasks, surpass- Accuracy
ing average human results. The reason could be that the
generated problems are generally harder for humans but
easier for LLMs. Nevertheless, GPT-4 struggled in areas like linear equations and abductive logic. This indicates that future development could involve more data
Human ChatGPT Vicuna-13b-v1.3
GPT4 Llama2-13b-chat
100
80
60
Accuracy 40
20
0 ArithmeticLinear EquationBool LogicDeductive LogicAbductive Logic
Task
from specific domains.
4.3 CASE STUDY
In an endeavor to comprehensively understand the behavior of LLMs, we meticulously examined the failure modes.
Our focus is especially on the most challenging datasets for
arithmetic, deductive logic, abductive logic, and reachability
tasks based on the performance of GPT-4. We randomly selected 20 failure samples for each task and summarized the
failure modes in Figure 4. The detailed failure cases are presented in Appendix D.5. The error types vary, indicating that
there is much room for improvement.
Figure 3: Human vs. LLMs results.
Error Types
Partial
Calculation Error
Unsubstantiated
23.1 Response
33.8
Instructional 9.2
Oversight
21.5 12.3
Self
Incorrect Contradiction
Reasoning
Figure 4: Failure modes distribution.
**Partial calculation error: GPT-4 occasionally errs in inter-**
mediate steps, while keeping the remaining steps correct. We emphasize that the errors may be as
simple as 20/7 = 37.28. This aligns with (Dziri et al., 2023) noting LLMs sometimes give partially
correct multi-digit multiplication results. Incorrect reasoning and self contradiction: In reasoning tasks, GPT-4 may misinterpret rules. Given an abductive logic A ∨ _B →_ _C with C is False,_
the premise A, B must be False. However, GPT-4 inaccurately abduced that either A or B might
be False. Further, GPT-4 occasionally contradicts itself in its assumptions for the same inference
in abductive logic task. Unsubstantiated response: In reasoning tasks and algorithm tasks, GPT-4
often answers without any inferences or justifications. Its answer-only responses suggest possible
memorization or shallow understanding. Instructional oversight: Occasionally, GPT-4 adeptly arrives at the correct computation but stumbles when it comes to adhering to the output instructions
laid out in prompts, for example, the required relative precision of mathematic calculation.
4.4 ABLATION STUDY
**Impact of complexity constraints** **: In Figure 5, we vary complexity in GPT-3.5-Turbo by ad-**
_CG_
justing constraints as described in Sec. 3.2.2 and observe how LLMs performance shifts across
arithmetic, boolean logic, and deductive logic tasks. Notably, as task intricacy rises due to augmented complexity parameters, LLMs’ performance diminishes. Depth emerges as the predominant
challenge in tree-based DAGs, emphasizing the LLMs’ difficulty with extended inference steps.
**Prompt engineering: We evaluate five prompting techniques (PE) on our most challenging datasets,**
as outlined in Table 5 and Appendix D.7. No PE methods can perform best in all tasks. While APE
notably boosts the Linear Equation task by 10%, it negatively impacts deductive and abductive logic.
These varied outcomes highlight the importance of task-specific PE selection and development.
2The results may not represent the highest level of human performance. Demographics are in Appendix D.8.
-----
Depth Width Add Extra Links Embed Random Descs
Arithmetic Boolean Logic Deductive Logic
100 100
80
90
50
80
60
Accuracy Accuracy Accuracy
0 70
60 40
D1 D2 D3 D4 D5 D1 D2 D3 D4 D5 D1 D2 D3 D4 D5
Difficulty Levels Difficulty Levels Difficulty Levels
Figure 5: Comparison results across different complexity constraints.
**Influence of model size: We further evaluate the performance of Llama2 with different model sizes**
of arithmetic, boolean logic and reachability tasks on their simplest dataset D1. Table 6 shows that
larger sizes produce better results, but mostly still not surpass GPT-4 and human.
5 DYVAL HELPS FINE-TUNING
In this section, we show that DYVAL-generated data can Vanilla Fine-Tuned
further be utilized to fine-tune LLMs to improve their capabilities of solving complex tasks. Specifically, we generate training data for 7 tasks to fine-tune Llama2-13B- 30
chat. The details of fine-tuning and training sample gener- 20
ation are in Appendix E. We then test the model with dif- Accuracy
ferent settings: (1) in-distribution samples with the same 10
difficulty as the training data; (2) out-of-distribution sam- 0
ples, whose difficulty levels are higher than the training GSM8KSVAMP FOLIO RACO DP LCS
data. To further demonstrate the effectiveness of our gen- Task
erated data, we test the models with few-shot examples on
Vanilla Fine-Tuned
40
30
20
Accuracy
10
0
GSM8KSVAMP FOLIO RACO DP LCS
Task
Figure 6: Results on existing bench
**existing benchmarks including GSM8K (Cobbe et al.,**
marks using Llama2-13B-chat model
2021) and SVAMP (Patel et al., 2021) to evaluate math
fine-tuned on DYVAL-generated data.
abilities, FOLIO (Han et al., 2022) and RACO (bench authors, 2023) to evaluate the logical reasoning abilities, and DP (Dziri et al., 2023) and LCS (bench
authors, 2023) to evaluate the algorithm abilities. Results in Figure 6 and 10 show that the performance of the fine-tuned model increases in all tasks. It shows that DYVAL is effective not only as a
benchmark but also in enhancing the performance of LLMs on existing benchmarks via fine-tuning
on its generated samples. The improvement might stem from the similarities between various benchmarks and DYVAL-generated samples. For instance, GSM8K samples can be interpreted as trees of
depth 2 or 3. Interestingly, even no dynamic programming tasks in our fine-tuning, the fine-tuned
model also showed improved performance on the DP and LCS datasets. This underscores the potential learning capability of LLMs and the efficacy of training samples generated by DYVAL. We
further fine-tuned GPT-3.5-Turbo and examined its ability on general natural language understanding. The results indicated that fine-tuning on our generated datasets does not necessarily hurt the
natural language understanding ability, as comprehensively discussed in Appendix G.
6 CONCLUSION AND DISCUSSION
We proposed DYVAL, a dynamic LLMs evaluation protocol to mitigate the data contamination and
static complexity of existing benchmarks. We designed the graph-informed DYVAL for reasoning
tasks. The strength of DYVAL lies in its dynamic generation of samples, with inherent flexibility for
difficulty adjustment. We observed several interesting findings in experiments using our benchmark.
More importantly, DYVAL-generated samples can not only be used as evaluation samples, but also
act as fine-tuning data for LLMs to enhance their performance in existing benchmarks.
Our work has several limitations. (1) Tasks: We currently focused on reasoning tasks. While DYVAL
supports other tasks (see Sec. H), it requires design of the generation algorithm G. (2) Samples: Our
experiments utilized a limited set of test samples due to resource constraints. Evaluations on larger
sets may help to observe more findings. (3) Fine-tuning: Fine-tuning can be done on more diverse
models and datasets to gain deeper insights.
-----
ACKNOWLEDGEMENT AND DISCLAIMER
The purpose of this research is to present a dynamic and evolving evaluation protocol in response to
the rapid development of LLMs. We have the following claims. First, the generation mechanism of
DYVAL does not contain any potentially harmful words or expressions but only mathematical, logical, and algorithmic descriptions. In the future, the usage of DYVAL on other natural language tasks
should be dealt with cautions to not include any harmful or irresponsible languages. Second, human
subjects are involved in this study to act as LLMs’ competitors for performance comparison and
analysis. All human studies are conducted obeying laws and regulations in certain countries. Third,
the experiments on GPT-3.5-Turbo and GPT-4 conducted in this paper are based on their latest version in June, 2023. Authors recommend using the same version of these services for reproducibility.
As we tried our best to tune the best prompts for our experiments, it is, however, well-known that
LLMs are highly sensitive to prompts. Therefore, the experiments in this paper are only based on
our prompt design and codebase. Finally, we may have concluded that some LLMs in this paper
achieved poor performance in our benchmark, but this does not mean these models are not good or
cannot be used in practice. Authors remain positive and optimistic to all evaluated LLMs that they
will further be stronger.
REFERENCES
Aho Alfred V, Lam Monica S, Sethi Ravi, Ullman Jeffrey D, et al. Compilers-principles, techniques,
_and tools. pearson Education, 2007._
Jacob Andreas. Good-enough compositional data augmentation. In Proceedings of the 58th Annual
_Meeting of the Association for Computational Linguistics, pp. 7556–7566, Online, July 2020._
Association for Computational Linguistics.
Devansh Arpit, Stanislaw Jastrzkebski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S. Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, and Simon
Lacoste-Julien. A closer look at memorization in deep networks. In Proceedings of the 34th
_International Conference on Machine Learning, volume 70, pp. 233–242, 06–11 Aug 2017._
Yejin Bang, Samuel Cahyawijaya, Nayeon Lee, Wenliang Dai, Dan Su, Bryan Wilie, Holy Lovenia,
Ziwei Ji, Tiezheng Yu, Willy Chung, et al. A multitask, multilingual, multimodal evaluation of
chatgpt on reasoning, hallucination, and interactivity. arXiv preprint arXiv:2302.04023, 2023.
BIG bench authors. Beyond the imitation game: Quantifying and extrapolating the capabilities of
language models. Transactions on Machine Learning Research, 2023. ISSN 2835-8856.
Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. On the
dangers of stochastic parrots: Can language models be too big? FAccT 2021, pp. 610–623, New
York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383097.
Lukas Berglund, Meg Tong, Max Kaufmann, Mikita Balesni, Asa Cooper Stickland, Tomasz Korbak, and Owain Evans. The reversal curse: Llms trained on “a is b” fail to learn “b is a”. arXiv
_preprint arXiv:2309.12288, 2023._
Stella Biderman, USVSN Sai Prashanth, Lintang Sutawika, Hailey Schoelkopf, Quentin Anthony,
Shivanshu Purohit, and Edward Raf. Emergent and predictable memorization in large language
models. arXiv preprint arXiv:2304.11158, 2023.
Nathan Brody. What is intelligence? International Review of Psychiatry, 11(1):19–25, 1999.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are
few-shot learners. Advances in neural information processing systems, 33:1877–1901, 2020.
S´ebastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general
intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023.
-----
Richard L Burden, J Douglas Faires, and Annette M Burden. Numerical analysis. Cengage learning,
2015.
Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan
Zhang. Quantifying memorization across neural language models. In The Eleventh International
_Conference on Learning Representations, 2023._
Jiaao Chen, Xiaoman Pan, Dian Yu, Kaiqiang Song, Xiaoyang Wang, Dong Yu, and Jianshu
Chen. Skills-in-context prompting: Unlocking compositionality in large language models. arXiv
_preprint arXiv:2308.00304, 2023._
Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng,
Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot
impressing gpt-4 with 90%* chatgpt quality. See https://vicuna. lmsys. org (accessed 14 April
_2023), 2023._
Minje Choi, Jiaxin Pei, Sagar Kumar, Chang Shu, and David Jurgens. Do llms understand social
knowledge? evaluating the sociability of large language models with socket benchmark. arXiv
_preprint arXiv:2305.14938, 2023._
Raunak Chowdhuri, Neil Deshmukh, and David Koplow. No, gpt4
can’t ace mit. [https://flower-nutria-41d.notion.site/](https://flower-nutria-41d.notion.site/No-GPT4-can-t-ace-MIT-b27e6796ab5a48368127a98216c76864)
[No-GPT4-can-t-ace-MIT-b27e6796ab5a48368127a98216c76864, 2023.](https://flower-nutria-41d.notion.site/No-GPT4-can-t-ace-MIT-b27e6796ab5a48368127a98216c76864)
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi
Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. Transformers as soft reasoners over language.
In Christian Bessiere (ed.), Proceedings of the Twenty-Ninth International Joint Conference on
_Artificial Intelligence, IJCAI-20, pp. 3882–3890. International Joint Conferences on Artificial_
Intelligence Organization, 7 2020.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to
solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
George Cybenko. Approximation by superpositions of a sigmoidal function. Mathematics of control,
_signals and systems, 2(4):303–314, 1989._
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West,
Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and fate: Limits of transformers
on compositionality. arXiv preprint arXiv:2305.18654, 2023.
Irena Gao, Gabriel Ilharco, Scott Lundberg, and Marco Tulio Ribeiro. Adaptive testing of computer
vision models. arXiv preprint arXiv:2212.02774, 2022.
J. Gao, J. Lanchantin, M. L. Soffa, and Y. Qi. Black-box generation of adversarial text sequences to
evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pp. 50–56,
May 2018. doi: 10.1109/SPW.2018.00016.
Shahriar Golchin and Mihai Surdeanu. Data contamination quiz: A tool to detect and estimate
contamination in large language models. arXiv preprint arXiv:2311.06233, 2023a.
Shahriar Golchin and Mihai Surdeanu. Time travel in llms: Tracing data contamination in large
language models. arXiv preprint arXiv:2308.08493, 2023b.
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy
Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, et al. Folio: Natural language reasoning
with first-order logic. arXiv preprint arXiv:2209.00840, 2022.
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob
Steinhardt. Measuring massive multitask language understanding. In International Conference
_on Learning Representations, 2021._
-----
Edward J Hu, yelong shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang,
and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Con_ference on Learning Representations, 2022._
Yuzhen Huang, Yuzhuo Bai, Zhihao Zhu, Junlei Zhang, Jinghan Zhang, Tangjun Su, Junteng Liu,
Chuancheng Lv, Yikai Zhang, Jiayi Lei, et al. C-eval: A multi-level multi-discipline chinese
evaluation suite for foundation models. arXiv preprint arXiv:2305.08322, 2023.
[HuggingFace. Open-source large language models leaderboard. https://huggingface.co/](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
[spaces/HuggingFaceH4/open_llm_leaderboard, 2023.](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. Is bert really robust? a strong baseline
for natural language attack on text classification and entailment. In Proceedings of the AAAI
_conference on artificial intelligence, volume 34, pp. 8018–8025, 2020._
Zhijing Jin, Jiarui Liu, Zhiheng Lyu, Spencer Poff, Mrinmaya Sachan, Rada Mihalcea, Mona Diab,
and Bernhard Sch¨olkopf. Can large language models infer causation from correlation? _arXiv_
_preprint arXiv:2306.05836, 2023._
Tushar Khot, Harsh Trivedi, Matthew Finlayson, Yao Fu, Kyle Richardson, Peter Clark, and Ashish
Sabharwal. Decomposed prompting: A modular approach for solving complex tasks. In The
_Eleventh International Conference on Learning Representations, 2022._
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie
Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian
Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina
Williams. Dynabench: Rethinking benchmarking in NLP. In Proceedings of the 2021 Confer_ence of the North American Chapter of the Association for Computational Linguistics: Human_
_Language Technologies, pp. 4110–4124, June 2021._
Dan Klein and Christopher D Manning. Accurate unlexicalized parsing. In Proceedings of the 41st
_annual meeting of the association for computational linguistics, pp. 423–430, 2003._
Jan Koco´n, Igor Cichecki, Oliwier Kaszyca, Mateusz Kochanek, Dominika Szydło, Joanna Baran,
Julita Bielaniewicz, Marcin Gruza, Arkadiusz Janz, Kamil Kanclerz, et al. Chatgpt: Jack of all
trades, master of none. Information Fusion, pp. 101861, 2023.
Ben Krause, Emmanuel Kahembwe, Iain Murray, and Steve Renals. Dynamic evaluation of neural
sequence models. In International Conference on Machine Learning, pp. 2766–2775. PMLR,
2018.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. TextBugger: Generating adversarial text
against real-world applications. In Proceedings 2019 Network and Distributed System Security
_Symposium. Internet Society, 2019. doi: 10.14722/ndss.2019.23138._
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. BERT-ATTACK: Adversarial attack against BERT using BERT. In Proceedings of the 2020 Conference on Empirical
_Methods in Natural Language Processing (EMNLP), pp. 6193–6202, November 2020._
Minghao Li, Feifan Song, Bowen Yu, Haiyang Yu, Zhoujun Li, Fei Huang, and Yongbin Li. Apibank: A benchmark for tool-augmented llms. arXiv preprint arXiv:2304.08244, 2023a.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy
Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following
[models. https://github.com/tatsu-lab/alpaca_eval, 2023b.](https://github.com/tatsu-lab/alpaca_eval)
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy
Liang, and Tatsunori B Hashimoto. Alpacaeval: An automatic evaluator of instruction-following
models, 2023c.
Yuanzhi Li, S´ebastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, and Yin Tat Lee.
Textbooks are all you need ii: phi-1.5 technical report. arXiv preprint arXiv:2309.05463, 2023d.
-----
Yucheng Li. An open source data contamination report for llama series models. arXiv preprint
_arXiv:2310.17589, 2023._
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga,
Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan,
Bobby Yan, Ce Zhang, Christian Alexander Cosgrove, Christopher D Manning, Christopher Re,
Diana Acosta-Navas, Drew Arad Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda
Rong, Hongyu Ren, Huaxiu Yao, Jue WANG, Keshav Santhanam, Laurel Orr, Lucia Zheng,
Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri S. Chatterji, Omar Khattab,
Peter Henderson, Qian Huang, Ryan Andrew Chi, Sang Michael Xie, Shibani Santurkar, Surya
Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang,
Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models.
_Transactions on Machine Learning Research, 2023. ISSN 2835-8856._
David F Lohman and Joni M Lakin. Intelligence and reasoning. The Cambridge handbook of
_intelligence, pp. 419–441, 2011._
Brian Lovin. Gpt-4 performs significantly worse on coding problems not in its training data.
[https://brianlovin.com/hn/35297067, 2023.](https://brianlovin.com/hn/35297067)
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, and Dongmei Zhang. Wizardmath: Empowering mathematical reasoning
for large language models via reinforced evol-instruct. arXiv preprint arXiv:2308.09583, 2023.
Zhiyi Ma, Kawin Ethayarajh, Tristan Thrush, Somya Jain, Ledell Wu, Robin Jia, Christopher Potts,
Adina Williams, and Douwe Kiela. Dynaboard: An evaluation-as-a-service platform for holistic
next-generation benchmarking. Advances in Neural Information Processing Systems, 34:10351–
10367, 2021.
Inbal Magar and Roy Schwartz. Data contamination: From memorization to exploitation. arXiv
_preprint arXiv:2203.08242, 2022._
[OpenAI. https://chat.openai.com.chat, 2023a.](https://chat.openai.com.chat)
OpenAI. Gpt-4 technical report, 2023b.
Yonatan Oren, Nicole Meister, Niladri Chatterji, Faisal Ladhak, and Tatsunori B Hashimoto. Proving
test set contamination in black box language models. arXiv preprint arXiv:2310.17623, 2023.
John Palowitch, Anton Tsitsulin, Brandon Mayer, and Bryan Perozzi. Graphworld: Fake graphs
bring real insights for gnns. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge
_Discovery and Data Mining, pp. 3691–3701, 2022._
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple
math word problems? In Proceedings of the 2021 Conference of the North American Chapter of
_the Association for Computational Linguistics: Human Language Technologies, pp. 2080–2094,_
June 2021.
Marco Tulio Ribeiro and Scott Lundberg. Adaptive testing and debugging of nlp models. In Pro_ceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1:_
_Long Papers), pp. 3253–3267, 2022._
Marco Tulio Ribeiro, Tongshuang Wu, Carlos Guestrin, and Sameer Singh. Beyond accuracy: Behavioral testing of NLP models with CheckList. In Proceedings of the 58th Annual Meeting of
_the Association for Computational Linguistics, pp. 4902–4912, Online, July 2020. Association_
for Computational Linguistics.
Subhro Roy and Dan Roth. Solving general arithmetic word problems. In Proceedings of the 2015
_Conference on Empirical Methods in Natural Language Processing, pp. 1743–1752, September_
2015.
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander
Kranias, John J Nay, Kshitij Gupta, and Aran Komatsuzaki. Arb: Advanced reasoning benchmark
for large language models. arXiv preprint arXiv:2307.13692, 2023.
-----
Rylan Schaeffer. Pretraining on the test set is all you need. arXiv preprint arXiv:2309.08632, 2023.
Koustuv Sinha, Shagun Sodhani, Jin Dong, Joelle Pineau, and William L. Hamilton. CLUTRR: A
diagnostic benchmark for inductive reasoning from text. In Proceedings of the 2019 Conference
_on Empirical Methods in Natural Language Processing and the 9th International Joint Confer-_
_ence on Natural Language Processing (EMNLP-IJCNLP), pp. 4506–4515, Hong Kong, China,_
November 2019. Association for Computational Linguistics.
Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, and Muhan
Zhang. Large language models are in-context semantic reasoners rather than symbolic reasoners.
_arXiv preprint arXiv:2305.14825, 2023._
Michael T¨anzer, Sebastian Ruder, and Marek Rei. Memorisation versus generalisation in pre-trained
language models. In Proceedings of the 60th Annual Meeting of the Association for Computa_tional Linguistics (Volume 1: Long Papers), pp. 7564–7578, May 2022._
[Xwin-LM Team. Xwin-lm, 9 2023. URL https://github.com/Xwin-LM/Xwin-LM.](https://github.com/Xwin-LM/Xwin-LM)
Krishnaiyan Thulasiraman and Madisetti NS Swamy. Graphs: theory and algorithms. John Wiley
& Sons, 2011.
Jidong Tian, Yitian Li, Wenqing Chen, Liqiang Xiao, Hao He, and Yaohui Jin. Diagnosing the
first-order logical reasoning ability through LogicNLI. In Proceedings of the 2021 Conference on
_Empirical Methods in Natural Language Processing, pp. 3738–3747, November 2021._
Yuchi Tian, Kexin Pei, Suman Jana, and Baishakhi Ray. Deeptest: Automated testing of deepneural-network-driven autonomous cars. In Proceedings of the 40th international conference on
_software engineering, pp. 303–314, 2018._
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
Oriol Vinyals, Łukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. Grammar as a foreign language. Advances in neural information processing systems, 28, 2015.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi,
Quoc V Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language
models. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho (eds.), Advances
_in Neural Information Processing Systems, 2022._
Tianwen Wei, Liang Zhao, Lichang Zhang, Bo Zhu, Lijie Wang, Haihua Yang, Biye Li, Cheng
Cheng, Weiwei L¨u, Rui Hu, et al. Skywork: A more open bilingual foundation model. arXiv
_preprint arXiv:2310.19341, 2023._
Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Aky¨urek, Boyuan Chen, Bailin Wang, Najoung Kim,
Jacob Andreas, and Yoon Kim. Reasoning or reciting? exploring the capabilities and limitations
of language models through counterfactual tasks. arXiv preprint arXiv:2307.02477, 2023.
Linyi Yang, Shuibai Zhang, Libo Qin, Yafu Li, Yidong Wang, Hanmeng Liu, Jindong Wang, Xing
Xie, and Yue Zhang. Glue-x: Evaluating natural language understanding models from an out-ofdistribution generalization perspective. In Findings of ACL, 2023a.
Shuo Yang, Wei-Lin Chiang, Lianmin Zheng, Joseph E Gonzalez, and Ion Stoica. Rethinking
benchmark and contamination for language models with rephrased samples. _arXiv preprint_
_arXiv:2311.04850, 2023b._
Matej Zeˇcevi´c, Moritz Willig, Devendra Singh Dhami, and Kristian Kersting. Causal parrots: Large
language models may talk causality but are not causal. Transactions on Machine Learning Re_search, 2023._
Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding
deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–
115, 2021.
-----
Le Zhang, Zichao Yang, and Diyi Yang. TreeMix: Compositional constituency-based data augmentation for natural language understanding. In Proceedings of the 2022 Conference of the
_North American Chapter of the Association for Computational Linguistics: Human Language_
_Technologies, pp. 5243–5258, July 2022._
Yifan Zhang, Jingqin Yang, Yang Yuan, and Andrew Chi-Chih Yao. Cumulative reasoning with
large language models. arXiv preprint arXiv:2308.04371, 2023.
Jie Zhao, Xiang Deng, and Huan Sun. Easy-to-hard: Leveraging simple questions for complex
question generation, 2019.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang,
Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, et al. Judging llm-as-a-judge with mt-bench and
chatbot arena. arXiv preprint arXiv:2306.05685, 2023.
Wanjun Zhong, Ruixiang Cui, Yiduo Guo, Yaobo Liang, Shuai Lu, Yanlin Wang, Amin Saied,
Weizhu Chen, and Nan Duan. Agieval: A human-centric benchmark for evaluating foundation
models. arXiv preprint arXiv:2304.06364, 2023.
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat,
Ping Yu, Lili Yu, Susan Zhang, Gargi Ghosh, Mike Lewis, Luke Zettlemoyer, and Omer Levy.
Lima: Less is more for alignment, 2023a.
Denny Zhou, Nathanael Sch¨arli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, et al. Least-to-most prompting enables complex
reasoning in large language models. In ICLR, 2023b.
Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin,
Ji-Rong Wen, and Jiawei Han. Don’t make your llm an evaluation benchmark cheater. arXiv
_preprint arXiv:2311.01964, 2023c._
Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, and
Jimmy Ba. Large language models are human-level prompt engineers. In ICLR, 2023d.
Kaijie Zhu, Jindong Wang, Jiaheng Zhou, Zichen Wang, Hao Chen, Yidong Wang, Linyi Yang, Wei
Ye, Neil Zhenqiang Gong, Yue Zhang, et al. Promptbench: Towards evaluating the robustness of
large language models on adversarial prompts. arXiv preprint arXiv:2306.04528, 2023.
Zeyuan Allen Zhu and Yuanzhi Li. Physics of language models: Part 3.1, knowledge storage and
extraction, 2023.
Caleb Ziems, William Held, Omar Shaikh, Jiaao Chen, Zhehao Zhang, and Diyi Yang. Can large
language models transform computational social science? _arXiv preprint arXiv:2305.03514,_
2023.
-----
CONTENTS
**A Preliminary on Directed Acyclic Graph** **16**
**B** **Details of DYVAL** **17**
B.1 Generation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
B.2 Complexity control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
B.3 Description function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
B.4 Description order . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
B.5 Potentials in Using GPT-4 to Generate Description Functions . . . . . . . . . . . . 22
**C Proof** **23**
**D Details of Experiments** **23**
D.1 Experiment Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
D.2 Prompts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
D.3 Evaluation Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
D.4 Details of Experiment Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
D.5 Details of Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
D.6 Details of Varing Complexity Constraints . . . . . . . . . . . . . . . . . . . . . . 30
D.7 Details of Prompt Engineering . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
D.8 Human Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
**E** **Details of Fine-tuning** **32**
E.1 Constructing training data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
E.2 Training data and testing data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
E.3 Results of Fine-tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
**F** **Imbalanced Generated Dataset** **36**
**G General Language Understanding Ability after Fine-tuning** **37**
**H Flexibility to Natural Language Tasks** **37**
A PRELIMINARY ON DIRECTED ACYCLIC GRAPH
Directed Acyclic Graphs, commonly referred to as DAGs, are a category of graphs that encapsulate
a unique structure: they are directed and contain no cycles. In a DAG, vertices are connected by
directed links, and there exists no sequence of links that loops back to an original node. Every link
in a DAG has an initial node and a terminal node, giving it a direction. This is often symbolized
as a → _b, where a is the starting node and b is the ending node. A key property that differentiates_
DAGs from other directed graphs is their lack of cycles. In other words, starting from any node in
the graph, one cannot traverse a series of links and return to the same node.
In our implementation, each node comprises three attributes: 1) Children (Links): These are the
direct dependents or subsequent nodes that a given node connects to. They highlight the immediate
-----
relations or following a particular node. 2) Value: Every node possesses a value, which can either be
explicitly assigned or derived based on its operation and its children. This value captures the essence
or result of the represented subproblem. 3) Operation: Especially relevant in tree-based DAGs, the
operation dictates how a node interprets or processes the values of its children to compute its own
value. Operations might include mathematical functions, logical evaluations.
B DETAILS OF DYVAL
B.1 GENERATION ALGORITHM
We distinguish DAGs into two primary categories: Tree-based DAGs (T-DAGs) and General DAGs
(G-DAGs), as shown in Figure 1.
B.1.1 T-DAGS
Tree-based DAGs possess an innate hierarchical structure that frequently encapsulate tasks that entail a sequence of conditions culminating in a definitive conclusion or result. This hierarchy naturally aligns with the structure of many mathematical and logical problems. For instance, in solving
a multi-step algebraic problem, one often starts with the provided equations (leaf nodes) and proceeds step-by-step, combining and reducing these equations until arriving at the final solution (root
node). Such a natural progression of deduction makes tree-based DAGs particularly feasible for
these problems.
We employ a top-down approach to construct a Tree-based DAG. This algorithm is tailored to produce a tree with a designated depth and width. The inherent randomness stems from two main
factors: the operations assigned to intermediate nodes and the initialization values of the leaf nodes.
For the intermediate nodes, we commence by randomly selecting an operation that defines the relationship between the node and its children. Take an arithmetic task as an example: selecting
‘addition (+)’ implies that the node’s value is the sum of its children’s values. Once all child nodes
are established, we compute the parent node’s value accordingly. In the case of leaf nodes, values
are assigned randomly, such as picking an integer from the range [1, 10] for arithmetic tasks.
B.1.2 G-DAGS
General DAGs, diverging from tree-based ones, lack a strict hierarchy. Instead, they present a more
intricate web of node relationships. Their strength lies in simulating complex, intertwined relations
in real-world situations. A classic use-case is the representation of transportation systems where
nodes symbolize cities and edges represent connecting roads. Challenges such as determining if one
city is accessible from another encapsulate the real-world problems general DAGs adeptly model.
Their flexibility extends to representing a plethora of situations, from mapping supply-chain logistics
to analyzing social networks.
To create a general DAG, we initiate by generating isolated nodes without any connecting links.
Subsequently, each node is endowed with a random value. For every node, the number of children is
determined randomly, the maximum number of children is depended on the input. We then establish
the links by selecting target child nodes at random.
B.2 COMPLEXITY CONTROL
Figure 7 demonstrated 4 types of complexity constraints for T-DAGs. Compared to original case,
adding width and additional links augments the computational intricacy of each subproblem. Increasing the depth escalates the complexity by necessitating more inference steps. Embedding random descriptions aims to distract LLMs.
B.3 DESCRIPTION FUNCTION
Figure 9 presented an illustration of our generated 7 tasks in 3 subjects: (1) Mathematics (DYVALM), which includes arithmetic task and linera equation task; (2) Logical Reasoning (DYVAL-L),
-----
**Origin** **Add width** **Add depth** **Add random links** **Embed random descriptions**
**A**
**B** **C**
**D** **E** **F**
**A**
**B** **C** **H**
**D** **E** **F** **G** **I** **J** **K**
**A**
**B** **C**
**D** **E** **F**
**A**
**B** **C**
**D** **E** **F**
**A**
**B** **C**
**D** **E** **F**
**G** **H** **I** **J** **K**
**L**
**M N**
**A’s value is calculated by**
**the sum of B and C.**
**M’s value is…**
**B’s value is …**
**…**
**L’s value is…**
**A’s value is calculated**
**by the sum of B and C.**
**B’s value is …**
|Col1|G I J K H|Col3|G I J K H|
|---|---|---|---|
|A’s value is calculated by the sum of B and C and E. B’s value is … …|A’s value is calculated by the sum of B and C. B’s value is … … G’s value is … …||A’s value is calculated by the sum of B and C and E. B’s value is … …|
|||||
Figure 7: The complexity constraints for Tree-based DAGs.
which includes boolean logic task, deductive logic task, and abductive logic task; (3) Algorithm
Tasks (DYVAL-A), which includes reachability task and max sum path task.
B.3.1 DYVAL-M
For DYVAL-M, we design mathematical problems that can be categorized into two main types:
**Arithmetic:** Given a T-DAG, the DAG description function has already demonstrated the premise:
the leaf nodes and the intermediate steps of inference: non-leaf nodes. Next, we select the root node
as the the variable required to solve, we append the question “What is the value of [Root]?”
to the final of the description where [Root] is filled with the name of the root variable.
Here is a description of an arithmetic problem:
The value of aaa is 9.
The value of aad is 4.
aae gets its value by taking the square root of the value that aad
has.
The value of aab is 3.
aac gets its value by adding together the value of aaa and aab.
aaf gets its value by subtracting the value of aae from the value of
aac.
Compute the result of aaf. If the solution cannot be calculated,
answer ’N/A’. Ensure your result is within a relative precision of
0.0001 (or 0.01%) compared to the ground truth value. Ensure your
final result begins with ’<<<’ and ends with ’>>>’, for example, if
the answer is 1, your final result should be <<<1>>>.
**Linear Equations:** Linear equations with multiple variables present a higher degree of complexity
compared to arithmetic. We use two-variable linear equations described as a1x + b1y = c1, a2x +
_b2y = c2. The coefficients are assigned a random value. We ask LLMs to solve the value of x, y for_
this linear system. Note that constructing such linear equations does not need T-DAGs or G-DAGs.
To introduce additional challenges, some coefficients can be substituted with values derived from
the T-DAG’s roots, forcing a two-step problem-solving approach: first calculating the coefficients
from the DAG and subsequently resolving the linear equations. Note that in our experiment, the tree
depth and width for linear equation task are (1, 1), (2, 2), (3, 2), (4, 2) respectively. (1, 1) represent
that the value of the replaced coefficient is directly given.
-----
Given the following linear equation system with two variables:
-7 x + aac0 y = 1
8 x + -1 y = 10
The calculation of aac0 is defined as:
The value of aab0 is 4.
The value of aaa0 is 9.
aac0 gets its value by adding together the value of aaa0 and aab0.
Determine the values of x and y. Ensure your results are within a
relative precision of 0.001 (or 0.1%) compared to the ground truth
values. Your response should be formatted as: <<<x’s value y’s
value>>>, e.g., if x=1 and y=2, then it should be <<<1 2>>>
B.3.2 DYVAL-L
DYVAL-L also shares a natural compatibility with the structured representation of T-DAGs due to
the innate progression and dependencies inherent in logical constructs. The tasks are:
**Boolean Logic:** Similar to arithmetic task, it primarily revolves around the manipulation and combination of True and False values using operators: AND, OR, NOT. The problems are presented
as: What is the truth value of [Root]?.
Here is a description of a boolean logic problem:
aaa is True.
The value of aab equals to (NOT aaa).
The value of aac equals to (NOT aab).
Compute the result of aac. If the solution can not be calculated,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends with
’>>>’, for example, if the answer is True, your final result should be
<<<True>>>.
**Deductive Logic:** The process of deductive logic is similar to boolean logic, but deduction introduces a bit complexity compared to boolean logic inference. For instance, given premises A (True)
and B (False), and the relationship (A ∧ _B) →_ _C, the value of conclusion C remains undeter-_
mined because the conjunction (A ∧ _B) is false. Given the description of T-DAGs, the problem is_
formulated as By the rule of deduction, what is the value of [Root]?
Here is a description of a deductive logic problem:
aab is True.
aaa is True.
(aaa and aab) -> aac.
aad is False.
(NOT aad) -> aae.
(aac or aae) -> aaf.
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if A is true, then B is true. If A is false, B’s truth
value remains undetermined (N/A). Deduct the result of aaf. If the
-----
solution can not be abducted, answer ’N/A’. Ensure your final result
begins with ’<<<’ and ends with ’>>>’, for example, if the answer is
True, your final result should be <<<True>>>.
**Abductive Logic:** It aims to hypothesize the most likely cause or explanation based on observed
outcomes. When working with a T-DAG, we assign a random value to the root node. Then, we
randomly select a leaf node, the problem is to determine the leaf node’ value based on the given
the DAG structure and root’s value. The task description is Given the value of [Root] is
[value], what is the value of [Node]?
Here is a description of an abductive logic problem:
(aaa or aab) -> aac.
(NOT aac) -> aad.
Given aad is False, what is the value of aab?
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if B is false, then A is false. If B is true, A’s truth
value remains undetermined (N/A). If the solution can not be deducted,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends with
’>>>’, for example, if the answer is True, your final result should be
<<<True>>>.
B.3.3 DYVAL-A
DYVAL-A tasks is suitable for D-DAG since they aim to model the real-world applications. Among
many problems that can be abstracted and modeled as a G-DAG, here we select two representative
tasks.
**Reachability:** A classic example of where G-DAGs shine is in modeling problems like the reachability of two nodes in the DAG. Given various nodes representing cities and links indicating roads
between them, the question models can help deduce the if there exists a route from one city to another. Thus, the description for this task is: “Can the [Node1] be reached by [Node2]” where Node1
and Node2 are randomly selected from the nodes in G-DAG.
Given a directed graph:
aai points to: (None).
aac points to: (aai).
aaj points to: (aai).
aah points to: (aai, aac, aaj).
aag points to: (aac).
aaf points to: (aag, aah, aaj).
aab points to: (aaf, aah).
aaa points to: (aag, aah, aaf, aaj).
aae points to: (aai, aac, aaa).
aad points to: (aab, aaf, aae).
Can aaf be reached starting from aag?
Respond with either ’<<<True>>>’ if reachable, or ’<<<False>>>’
otherwise.
-----
**Max sum path:** Compared to reachiability problem, max sum path is more complex. This problem
assign a value for each city, and then requires to find a path from two cities that the sum of the values
the path go through is maximum. It requires the LLMs to find all the path between two nodes, and
then determine the path with maximum value. The description for this task is What is the max
sum path from [Node 1] to [Node 2]?
Given a directed graph with values assigned to each node:
aaj points to: (None).
aah points to: (None).
aai points to: (aah).
aag points to: (aai).
aac points to: (aag).
aab points to: (aac, aag).
aaf points to: (aai).
aae points to: (aac, aah).
aad points to: (aag, aae, aaj).
aaa points to: (aae, aai, aaj, aad).
The value of aaj is 9
The value of aab is 8
The value of aah is 3
The value of aaf is 3
The value of aai is 3
The value of aae is 3
The value of aad is 6
The value of aac is 4
The value of aag is 8
The value of aaa is 4
What’s the maximum sum path from aaa to aae? For exmaple, the value
of the path A->B->C is obtained by summing the values of nodes A, B,
and C. Please format your response as <<<Answer>>>. For example, if
the answer is 1, it should be presented as <<<1>>>.
B.4 DESCRIPTION ORDER
- Topological Order: This approach sequences the description of nodes in a manner where every
node is introduced after all its descendent nodes. Such a sequence ensures that leaf nodes are
laid out prior to any operation that utilizes them (e.g., an addition or a logical AND).
- Reversed Topological Order: Taking an almost counter-intuitive approach, this order starts by
spotlighting the culminating nodes or outcomes. Once these results are laid bare, the narrative
retraces its steps, navigating backwards to the root nodes or primary inputs.
- Random Order: This unstructured method presents nodes in a random sequence, irrespective of their dependencies within the DAG. Such a disordered narrative challenges LLMs to
independently connect the dots and derive patterns.
-----
Arithmetic Linear Equation
What is the value of F? Here is the lineareq with two variable,
where G is calculated by…
F GX
D E E+ F- 4x+4y=G2x+3y=5
A B C A B C D
2 3 7 3 2 8 4
Boolean Logic Deductive Logic Abductive Logic
What is the value of G? What is the value of F? What is the value of F?
G F F
𝓕𝓖 𝓕𝑻
**F gets its value by** **Topological Order**
**dividing D from E**
**A is ..., B is …, C is …**
**D is ..., E is …,**
**F** **F is ...**
÷
**Reversed Topological Order**
**F is ...**
**D** **E** **D is ..., E is …,**
**+** √ **A is ..., B is …, C is …**
**A** **B** **C** **Random Order**
**1** **4** **7** **A is ..., E is …, F is ...,**
**D is ..., B is …,**
**C is …**
**B’s value is 4.**
Reachability Max Sum Path
Can G be reached by A? What is the maximum sum path
from A to F?
A B **A8** B
C D C D
E F G H E F G
Figure 9: Descriptions of 7 tasks.
Figure 8: Description function of arithmetic task.
B.5 POTENTIALS IN USING GPT-4 TO GENERATE DESCRIPTION FUNCTIONS
In this section, we explore the potentials in using GPT-4 to generate more natural and contextualized
problem descriptions rather than the plain ones in Section B.3.
We simply feed the following instruction to GPT-4:
“Here is a description of an arithmetic problem, please generate an
application math test using the following information. Do not change
the exact numbers. Note that the test should be contextualized, real
applications test, instead of the plain description.” followed by the DYVALgenerated problems such as the one in DYVAL-M.
Then, GPT-4 will generate the following application problem which is more vivid:
Problem:
Imagine you are managing a small warehouse. The warehouse has a total
storage capacity of 9 large crates (represented by ’aaa’). You also
have an additional area that can store 3 more crates (represented by
’aab’).
Recently, you purchased a set of 4 special crates (represented by
’aad’) that are stackable. When stacked, these crates take up the
space of 2 regular crates.
Your task is to calculate the new total storage capacity of your
warehouse after adding the stackable crates and utilizing the
additional storage area.
Steps:
Identify the total initial capacity (aaa). Identify the additional
capacity (aab). Calculate the effective space taken by the stackable
crates (aae) - consider them as a single unit after stacking.
Calculate the new total capacity by adding the initial and additional
capacities (aac). Finally, adjust the total capacity by considering
the space taken by the stackable crates (aaf). Question:
What is the new total storage capacity of the warehouse after these
adjustments?
(Use the given values and operations to solve the problem)
-----
While the above application problem looks more vivid than the plain one, it is in fact challenging
to verify its rationale and correctness. For instance, is the problem natural? Is the context correct?
Note that GPT-4 tends to first compute the answers to the original problem before generating a
new one, which may accumulate errors including problem understanding, application generation,
and computation. Therefore, while we point the feasibility of such practice, this in current stage is
difficult to verify and should be left for future work.
C PROOF
**Theorem C.1. Given a tree-based DAG with depth d and width w, if the operation set for non-**
leaf nodes has k distinct operations and the value set for leaf nodes contains n distinct values, the
probability that two independently generated DAGs are identical is: P = _k_ _w[d]w[−]−[1]1−1_ _× n[w][d][−][1]_ [][−][1] _._
_Proof. To determine the overall probability, we analyze the likelihood at each depth and then multi-_
ply these probabilities. For depth i, the number of nodes is w[i][−][1].
For depth i, 1 ≤ _i ≤_ _d −_ 1. Since these nodes are non-leaf nodes, the probability they are identical
in two independently generated DAGs is the likelihood all of them have the same operations: pi =
1
_k[wi][−][1][ .]_
For the leaf nodes at depth d, the probability that they are the same across two DAGs is: pd =
1
_n[wd][−][1][ .]_
Thus, the overall probability P that two DAGs are identical is: P = _i=1_ _[p][i][ ×][ p][d][.]_
Substituting the above expressions and simplifying gives the result: P[Q] =[d][−][1] _k_ _w[d]w[−]−[1]1−1_ _n[w][d][−][1]_ [][−][1] _._
_×_
Note: We consider two trees to be distinct even if they only differ in the order of operations. For
instance, the tree representing 3×5 is considered different from the tree representing 5×3. Excluding
such cases may be non-trivial and is unlikely to significantly affect the odds.
**Theorem C.2. Given a general DAG with n nodes where each node has a minimum of l ≥** 1
links, the lower bound of the probability that two randomly selected DAGs of this configuration are
identical is ((n − 1)!)[−][1].
_Proof. Consider a DAG where each node has exactly one outgoing link. The first node can be_
connected to any of the remaining n − 1 nodes. Subsequently, the second node can connect to any
of the remaining n − 2 nodes, excluding the one already connected to the first node. Following this
logic, the third node can connect to any of the n − 3 unconnected nodes, and so on.
Thus, the total number of distinct DAGs that can be constructed under these constraints is given by:
(n − 1) × (n − 2) × . . . × 2 × 1 = (n − 1)!
Given two randomly chosen DAGs of this kind, the likelihood they are identical is the inverse of the
number of unique DAGs: (n−11)!
This probability serves as the lower bound when considering the general case of DAGs with nodes
having a minimum of l ≥ 1 links, hence proving the theorem.
D DETAILS OF EXPERIMENTS
D.1 EXPERIMENT ENVIRONMENT
All experiments are conducted on a workstation equipped with an NVIDIA V100 GPU with 16GB
memory and A100 GPU with 80GB memory. For GPT-3.5-Turbo and GPT-4, we use OpenAI’s
-----
API for inference, the versions are gpt-3.5-turbo-0613 and gpt-4-0613. For the Llama2 models, we
downloaded from the Llama2 github repository[3] and follow the instruction[4] to convert them into
huggingface models. For Vicuna-13B-v1.3, we downloaded it from its github repository[5]. The
remaining models can be downloaded directly via huggingface.
D.2 PROMPTS
- Arithmetic:
Here is a description of an arithmetic problem:
_{}_
Compute the result of {}. If the solution cannot be calculated,
answer ’N/A’. Ensure your result is within a relative precision of
0.0001 (or 0.01%) compared to the ground truth value. Ensure your
final result begins with ’<<<’ and ends with ’>>>’, for example, if
the answer is 1, your final result should be <<<1>>>.
- Linear Equation:
Given the following linear equation system with two variables:
_{}_
Determine the values of x and y. Ensure your results are within a
relative precision of 0.001 (or 0.1%) compared to the ground truth
values. Your response should be formatted as: <<<x’s value y’s
value>>>, e.g., if x=1 and y=2, then it should be <<<1 2>>>
- Boolean Logic:
Here is a description of a boolean logic problem:
_{}_
Compute the result of {}. If the solution can not be calculated,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends
with ’>>>’, for example, if the answer is True, your final result
should be <<<True>>>.
- Deductive Logic:
Here is a description of a deductive logic problem:
_{}_
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if A is true, then B is true. If A is false, B’s truth
value remains undetermined (N/A). Deduce the result of {}. If the
solution can not be deduced, answer ’N/A’. Ensure your final result
begins with ’<<<’ and ends with ’>>>’, for example, if the answer is
True, your final result should be <<<True>>>.
- Abductive Logic:
Here is a description of an abductive logic problem:
_{}_
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if B is false, then A is false. If B is true, A’s
3https://github.com/facebookresearch/llama
4https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/convert llama weights to hf.py
5https://github.com/lm-sys/FastChat
-----
truth value remains undetermined (N/A). If the solution can not be
abduced, answer ’N/A’. Ensure your final result begins with ’<<<’
and ends with ’>>>’, for example, if the answer is True, your final
result should be <<<True>>>.
- Reachability:
Given a directed graph:
_{}_
Respond with either ’<<<True>>>’ if reachable, or ’<<<False>>>’
otherwise.
- Max Sum Path:
Given a directed graph with values assigned to each node:
_{}_
For exmaple, the value of the path A->B->C is obtained by summing
the values of nodes A, B, and C. Please format your response
as <<<Answer>>>. For example, if the answer is 1, it should be
presented as <<<1>>>.
D.3 EVALUATION SET
We categorize tasks into four complexity levels, denoted as D1 to D4. For tasks reliant on general
Directed Acyclic Graphs (DAGs), the node count is set to 7, 10, 15, 20. Each of these nodes possesses a maximum link range of 3, 4, 6, 8 and a minimum link count of 1. Conversely, for tasks
that utilize tree-based DAGs, the tree depths and widths are defined as (2, 2), (3, 2), (3, 3), (4, 2), in
order of increasing complexity.
The range of these datasets progresses from simple to intricate. To illustrate, an arithmetic problem
with a tree depth of 2 represents a basic two-variable arithmetic computation. In contrast, a task
with a tree depth of 4 exhibits heightened complexity, necessitating multiple inferential steps for
resolution.
D.4 DETAILS OF EXPERIMENT RESULTS
We do not report the results of Flan-T5-large, phi-1.5, WizardMath-13B, and Xwin-13B since their
performance is almost 0 even on simplest evaluation sets generated by our DYVAL. Therefore, we
extensively run the results of four remaining models: Vicuna-13B-v1.3, Llama2-13B-chat, GPT-3.5Turbo, and GPT-4. Table 2, 3, and 4 report the detailed results (average±standard error) of these
models in different complexity (D1∼D4) and different description generation orders (topological,
reversed topological, and random orders).
In the reachability task, as task difficulty escalated, Llama2-13B-chat’s performance paradoxically
improved. Upon investigation, Llama2-13B-chat essentially resorted to random guessing across
datasets. The proportion of ’True’ answers increased (from 40% in D1 to 60% in D3), with
’False’ responses being nearly absent. The remainder were non-responses, thus elevating the overall
accuracy. The observation is similar to those made in Sec.4.4 where we investigated the influence
of different model size.
Further, generation description order affects outcomes: in the reachability task, GPT-4’s accuracy
drops by 13.67% when given reversed order compared to topological order. See Appendix D.4 for
details of experiment results.
D.5 DETAILS OF CASE STUDY
We select 20 failure cases of the most challenging datasets of arithmetic, deductive logic, abductive
logic, and reachability of GPT-4.
-----
Table 2: Results for Mathematic Tasks
Task Dataset GPT4 ChatGPT Llama2-13b-chat Vicuna-13b-v1.3
Topo Reversed Rand Topo Reversed Rand Topo Reversed Rand Topo Reversed Rand
D1 98.00±0.00 100.00±0.00 99.00±1.00 95.00±0.40 99.53±0.23 97.27±0.90 12.33±0.90 38.67±1.86 24.20±1.93 2.73±0.64 0.53±0.58 2.40±0.69
D2 94.17±1.15 95.67±1.04 95.50±1.00 90.47±1.17 92.27±0.12 92.07±0.31 5.73±1.01 3.00±0.87 4.60±0.35 1.53±0.46 0.07±0.12 0.60±0.20
D3 85.83±1.89 87.67±1.61 84.35±2.26 76.20±2.80 78.20±3.41 78.47±3.83 1.07±0.12 2.47±0.23 3.07±0.76 1.13±0.31 0.07±0.12 0.20±0.00
D4 79.33±1.61 81.33±1.89 77.67±2.57 72.40±1.51 72.73±1.68 69.40±2.25 2.80±0.53 0.80±0.35 1.20±0.69 0.20±0.20 0.07±0.12 0.00±0.00
D1 56.33±1.15 58.50±0.00 56.33±3.01 36.20±1.04 36.20±2.42 36.27±2.66 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
D2 42.67±2.36 42.17±1.89 43.00±2.65 27.67±1.75 30.87±1.72 29.60±2.55 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
D3 44.33±2.52 43.17±6.60 43.83±2.93 19.40±1.06 29.67±1.29 23.87±2.10 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
D4 38.83±4.25 37.17±3.82 34.00±1.73 13.80±1.06 21.07±0.50 14.93±2.05 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
Table 3: Results for Logical Reasoning Tasks
Arithmetic
Linear
Equation
Task Dataset GPT4 ChatGPT Llama2-13b-chat Vicuna-13b-v1.3
Topo Reversed Rand Topo Reversed Rand Topo Reversed Rand Topo Reversed Rand
D1 100.00±0.00 100.00±0.00 100.00±0.00 99.80±0.20 99.87±0.23 97.60±0.53 25.33±1.17 12.73±0.23 19.53±1.33 77.93±1.47 84.93±0.12 81.13±0.12
D2 100.00±0.00 99.33±0.58 100.00±0.00 98.80±0.20 99.40±0.20 96.80±0.40 7.87±0.61 17.00±1.40 18.67±0.70 43.00±1.00 68.40±2.12 53.93±1.55
D3 97.00±1.00 100.00±0.00 100.00±0.00 99.60±0.35 98.00±0.69 92.93±1.10 13.53±1.55 20.07±1.51 17.93±0.50 28.93±2.19 42.47±2.10 39.67±2.58
D4 96.00±2.00 100.00±0.00 99.67±0.58 99.40±0.20 95.47±0.23 90.47±0.64 10.87±0.42 13.93±1.30 16.33±0.99 29.20±1.59 29.73±2.12 29.80±1.25
D1 100.00±0.00 88.17±1.26 95.17±1.53 81.87±0.76 82.47±1.42 81.53±2.72 45.40±1.25 56.27±1.03 49.13±0.42 11.87±0.31 44.60±1.11 20.73±0.99
D2 98.50±1.50 92.50±0.87 97.17±1.61 64.60±1.60 65.93±1.14 63.73±3.42 43.60±2.60 34.47±2.05 43.07±3.14 48.00±2.31 38.73±2.05 44.87±0.61
D3 98.17±1.53 87.83±2.52 98.33±1.04 63.47±2.48 61.60±2.80 63.33±1.86 26.60±1.91 33.47±1.68 26.27±1.21 46.67±1.75 45.47±2.72 34.67±2.21
D4 96.17±1.04 84.33±1.44 90.67±5.03 56.40±1.78 57.33±1.30 56.47±3.00 20.60±1.56 29.20±1.59 20.60±2.69 38.07±1.15 37.40±1.22 33.40±3.17
D1 93.50±0.50 83.33±3.33 91.00±1.00 37.93±2.14 49.33±3.59 38.07±2.61 3.73±0.23 0.00±0.00 1.73±0.76 56.40±2.25 31.60±1.22 45.53±2.10
D2 78.83±6.37 48.50±5.57 63.50±4.09 53.47±2.50 59.80±3.41 56.60±3.36 21.47±1.17 10.53±1.42 17.67±1.86 19.80±0.20 25.47±1.72 22.00±1.39
D3 64.67±5.51 49.83±3.18 58.50±3.28 56.13±3.06 60.80±1.06 57.87±2.81 12.60±1.51 7.60±2.25 8.07±0.95 20.40±1.31 14.80±0.92 17.20±0.87
Boolean
Logic
Deductive
Logic
Abductive
Logic
Here we present one failure case for each error type.
**Partial calculation error** It has been observed that GPT-4, in certain situations, commits errors
in intermediate computational steps, while it maintains correctness in the remaining steps. This
characteristic anomaly is not isolated to complex calculations. In fact, not only complex calculation,
seemingly straightforward calculations such as can be incorrectly computed. This observed behavior
is consistent with the findings presented by (Dziri et al., 2023), where they highlighted that low-level
learning models (LLMs) occasionally produce results that are only partially accurate, particularly in
the realm of multi-digit multiplication.
**Input:**
Here is a description of an arithmetic problem:
The value of aaj is 7.
aak gets its value by squaring the value that aaj has.
The value of aah is 6.
The value of aag is 2.
aai gets its value by dividing the value of aag by those of aah.
aan gets its value by multiplying together the value of aai and aak.
The value of aaa is 6.
aab gets its value by squaring the value that aaa has.
The value of aac is 8.
The value of aad is 1.
Table 4: Results for Algorithm Tasks
Task Dataset GPT4 ChatGPT Llama2-13b-chat Vicuna-13b-v1.3
Topo Reversed Rand Topo Reversed Rand Topo Reversed Rand Topo Reversed Rand
D1 83.67±1.15 92.67±1.15 85.33±3.06 59.53±0.76 63.87±1.51 63.40±2.42 21.60±1.20 23.20±0.80 26.87±1.47 11.47±1.29 29.80±2.03 23.53±2.20
D2 85.00±0.00 91.00±3.00 83.00±2.00 53.53±3.97 56.73±2.81 54.27±1.79 34.60±1.40 26.87±1.27 26.27±1.17 12.07±0.58 31.73±0.64 21.73±0.50
D3 68.17±2.93 77.67±0.58 67.67±2.31 49.67±2.55 57.53±1.90 53.73±3.13 39.33±2.08 39.33±1.80 37.47±1.01 13.60±1.00 29.67±3.75 21.80±2.88
D4 63.00±1.00 76.67±0.58 74.33±2.52 49.40±2.71 59.13±3.49 52.33±2.89 33.67±2.97 41.53±1.22 38.60±0.92 10.80±0.72 30.73±1.53 22.00±1.64
D1 37.33±5.86 30.67±6.11 26.67±6.03 29.47±2.08 29.67±2.77 25.27±0.83 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
D2 38.67±8.14 27.00±8.54 25.67±5.69 14.20±1.25 13.33±2.37 11.33±0.76 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
D3 21.33±4.04 17.00±5.00 16.67±4.16 6.40±1.64 8.40±1.51 7.60±0.72 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
D4 16.67±2.52 12.00±2.00 10.67±0.58 5.07±0.76 6.20±1.11 6.67±1.33 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00 0.00±0.00
Reachability
Max Sum
Path
-----
aae gets its value by dividing the value of aac by those of aad.
aaf gets its value by dividing the value of aab by those of aae.
aao gets its value by adding together the value of aaf and aan.
Compute the result of aao. If the solution cannot be calculated,
answer ’N/A’. Ensure your result is within a relative precision of
0.0001 (or 0.01%) compared to the ground truth value. Ensure your
final result begins with ’<<<’ and ends with ’>>>’, for example, if
the answer is 1, your final result should be <<<1>>>.
**GPT-4’s response:**
First, let’s calculate the values step by step:
aaj = 7
aak = aaj[2] = 7[2] = 49
aah = 6
aag = 2
aai = aag / aah = 2 / 6 = 0.3333
aan = aai * aak = 0.3333 * 49 = 16.3167 (should be 16.3317)
aaa = 6
aab = aaa[2] = 6[2] = 36
aac = 8
aad = 1
aae = aac / aad = 8 / 1 = 8
aaf = aab / aae = 36 / 8 = 4.5
aao = aaf + aan = 4.5 + 16.3167 = 20.8167
So, the result of aao is 20.8167.
Therefore, the final result is <<<20.8167>>>.
**Ground truth: 20.8333**
**Incorrect reasoning** When tasked with reasoning, GPT-4 can occasionally falter in rule interpretation, especially in abductive logic task.
**Input:**
Here is a description of an abductive logic problem:
(aaa and aab) -> aac.
(aae or aaf) -> aag.
(aah or aai) -> aaj.
(aag or aaj) -> aak.
(NOT aac) -> aad.
(aad or aak) -> aan.
Given aan is False, what is the value of aab?
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if B is false, then A is false. If B is true, A’s truth
value remains undetermined (N/A). If the solution can not be abduced,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends with
-----
’>>>’, for example, if the answer is True, your final result should be
<<<True>>>.
**GPT-4’s response:**
Given aan is False, it means either aad or aak must be False (both aad
and aak should be False) (since aad or aak -> aan).
If aak is False, then either aag or aaj must be False (since aag or
aaj -> aak).
If aag is False, then either aae or aaf must be False (since aae or
aaf -> aag).
If aaj is False, then either aah or aai must be False (since aah or
aai -> aaj).
However, none of these conditions provide any information about aab.
Therefore, the value of aab cannot be abduced from the given
information.
So, the answer is <<<N/A>>>.
**Ground truth: N/A**
**Self Contradiction** GPT-4, on occasion, presents self-contradictory assumptions during the same
inference in an abductive logic task.
**Input:**
Here is a description of an abductive logic problem:
(aah or aai) -> aaj.
(NOT aaj) -> aak.
(aad or aae) -> aaf.
(aaa or aab) -> aac.
(aac or aaf) -> aag.
(aag or aak) -> aan.
Given aan is False, what is the value of aah?
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if B is false, then A is false. If B is true, A’s truth
value remains undetermined (N/A). If the solution can not be abduced,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends with
’>>>’, for example, if the answer is True, your final result should be
<<<True>>>.
**GPT-4’s response:**
Since aan is False, at least one of aag or aak must be False.
However, we don’t know which one is False, so we can’t determine the
value of aah. (This inference step contradicts to the former example,
where GPT-4 assumes aah is False.) Therefore, the answer is ’N/A’.
<<<N/A>>>
**Ground truth: N/A**
**Unsubstaintiated response** A recurring observation in both reasoning and algorithmic tasks
is GPT-4’s propensity to provide answers without accompanying justifications. These succinct,
answer-only responses raise concerns about the depth of GPT-4’s understanding and whether its approach is grounded in actual comprehension or simply a result of memorization. This error mostly
happens in algorithm tasks and occasionally happens in logical reasoning tasks.
-----
**Input:**
Here is a description of an abductive logic problem:
(aaa or aab) -> aac.
(aad or aae) -> aaf.
(aac or aaf) -> aag.
(NOT aah) -> aai.
(NOT aaj) -> aak.
(aai or aak) -> aan.
(aag or aan) -> aao.
Given aao is False, what is the value of aaa?
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if B is false, then A is false. If B is true, A’s truth
value remains undetermined (N/A). If the solution can not be abduced,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends with
’>>>’, for example, if the answer is True, your final result should be
<<<True>>>.
**GPT-4’s response:**
<<<N/A>>>
**Ground truth: N/A**
**Instructional oversight** Interestingly, while GPT-4 often computes accurately, it sometimes falters when it comes to precisely following the output instructions as described in given prompts.
This discrepancy points to a potential gap in the model’s ability to parse and adhere to intricate
instructional details, even when the underlying computational capability is sound.
**Input:**
Here is a description of an arithmetic problem:
The value of aac is 6.
The value of aai is 6.
The value of aag is 8.
aan gets its value by subtracting the sum of the values of aak and aae
from the value of aah.
aao gets its value by multiplying together the value of aae and aan
and aae.
aah gets its value by multiplying together the value of aaf and aag
and aae.
The value of aap is 1.
The value of aaa is 6.
aad gets its value by taking the square root of the value that aac
has.
The value of aaj is 4.
aak gets its value by subtracting the sum of the values of aaj and aae
from the value of aai.
The value of aaq is 7.
aar gets its value by dividing the value of aap by those of aaq.
-----
aab gets its value by squaring the value that aaa has.
The value of aaf is 10.
aae gets its value by multiplying together the value of aab and aad
and aai.
Compute the result of aao. If the solution cannot be calculated,
answer ’N/A’. Ensure your result is within a relative precision of
0.0001 (or 0.01%) compared to the ground truth value. Ensure your
final result begins with ’<<<’ and ends with ’>>>’, for example, if
the answer is 1, your final result should be <<<1>>>.
**GPT-4’s response:**
Let’s solve this step by step:
1. aac = 6
2. aai = 6
3. aag = 8
4. aaa = 6
5. aad = sqrt(aac) = sqrt(6)
6. aaj = 4
7. aaq = 7
8. aap = 1
9. aaf = 10
10. aab = aaa[2] = 6[2] = 36
11. aae = aab * aad * aai = 36 * sqrt(6) * 6
12. aak = aai - (aaj + aae) = 6 - (4 + 36 * sqrt(6) * 6)
13. aah = aaf * aag * aae = 10 * 8 * 36 * sqrt(6) * 6
14. aan = aah - (aak + aae) = 10 * 8 * 36 * sqrt(6) * 6 - (6 - (4 +
36 * sqrt(6) * 6) + 36 * sqrt(6) * 6)
15. aao = aae * aan * aae = (36 * sqrt(6) * 6) * (10 * 8 * 36 *
sqrt(6) * 6 - (6 - (4 + 36 * sqrt(6) * 6) + 36 * sqrt(6) * 6)) * (36 *
sqrt(6) * 6)
This is a complex calculation and it’s not possible to simplify it
further. However, it’s clear that the result will be a very large
number. Since the problem doesn’t provide any constraints on the
values of the variables, we can’t say that the solution is ’N/A’.
(The equation is correct however GPT-4 did not simplify it. In our
experiment, we found that GPT-4 can handle large numbers, so the error
does not originate from them.)
So, the result of aao is a very large number that can’t be simplified
further.
**Ground truth: 11848342359.78579**
D.6 DETAILS OF VARING COMPLEXITY CONSTRAINTS
As shown in Figure 5, we systematically vary the levels of complexity in GPT-3.5-Turbo by adjusting
individual constraints while keeping others constant. Specifically, we explore how performance
metrics evolve as we incrementally adjust depth, width, #nodes, #max links, the number of extra
links, and the quantity of random descriptions across arithmetic, boolean logic, and deductive logic
tasks. To comprehensively evaluate the impact of complexity constraints, various parameters were
meticulously adjusted. The following elucidates the configurations employed:
-----
Table 5: Results of GPT-3.5-Turbo with prompt engineering
techniques on the toughest evaluation sets generated by DYVAL
(D4).
Linear Deductive Abductive Max Sum
Prompt engineering Arithmetic Equation Logic Logic Reachability Path
Vanilla 42.13 14.93 56.40 54.33 49.40 5.07
CoT (Wei et al., 2022) 42.33 21.93 52.93 43.73 47.73 1.93
Fewshot (Brown et al., 2020) **47.86** 2.40 35.93 41.60 **81.80** **12.20**
Least2most (Zhou et al., 2023b) 36.73 12.47 44.07 38.80 76.53 8.07
APE (Zhou et al., 2023d) 45.20 **23.40** 44.67 53.13 62.80 8.87
SKiC (Chen et al., 2023) 32.07 13.70 **63.00** **78.27** 71.40 11.80
Table 6: Results of Llama
2 with different sizes on DYVAL-generated evaluation samples (D1).
Size Arithmetic Boolean logic Reachability
7b 13.07 28.93 29.53
13b 24.20 19.53 26.53
70b 29.71 28.30 47.38
- Depth Constraint: Maintaining the width at 2, with neither the addition of random links nor
the embedding of extra descriptions (both set to 0), the depth was systematically varied, with
values set to 2, 3, 4, 5, and 6.
- Width Constraint: With a fixed depth of 3, and with the addition of random links and embedding of extra descriptions both neutralized to 0, the width was tested with the values 2, 3, 4, 5,
and 6.
- Random Link Addition Constraint: For this, a depth of 4 and a width of 2 were maintained,
with extra descriptions set to 0. The number of random links introduced varied as 0, 1, 2, 3,
and 4. It should be highlighted that due to the inherent acyclic constraint, certain nodes may
preclude the addition of extra links.
- Embedding Extra Descriptions: With a depth and width fixed at 4 and 2, respectively, and no
addition of random links (set to 0), the levels of embedded extra descriptions were calibrated
to 0, 1, 2, 3, and 4.
Across these variations, our results consistently underscore a notable trend: as the tasks become
more intricate through the augmentation of these complexity parameters, LLMs progressively struggle, underscoring the inherent challenges posed by increasing task intricacy. It can be observed that
depth is the most influential complexity constraint of tree-based DAGs, indicates that LLMs struggle
to deal with problems that requires more inference steps.
D.7 DETAILS OF PROMPT ENGINEERING
We explored five prompting techniques to evaluate their potential impact on our most challenging
datasets (excluding boolean logic since GPT-3.5-Turbo achieved comparable results on most challenging datasets): Zeroshot-CoT (Wei et al., 2022), Few-shot (3-shot in our experiments) (Brown
et al., 2020), Least-to-most (Zhou et al., 2023b), automatic prompt engineering (APE) (Zhou et al.,
2023d), and skill-in-context (SkiC) (Chen et al., 2023). The details of these techniques are as follows:
- Zeroshot-CoT: An approach that allows models to generalize from their pre-training without
explicit examples in the target task (Wei et al., 2022).
- Fewshot (3-shot in our experiments): Provides the model with a small number of examples
from the target task to aid in understanding and generalizing to the broader task (Brown et al.,
2020).
- Least to Most Prompting: This technique incrementally provides more specific prompts to
guide the model’s responses, adapting the prompt based on the difficulty level of the problem
(Zhou et al., 2023b).
- Automatic Prompting Engineering (APE): A method where prompts are automatically engineered to elicit the desired response from the model, often maximizing its performance (Zhou
et al., 2023d).
- Skill-in-Context (SKiC): This method seeks to understand a model’s inherent skills and utilize
them in a specific context to improve its outputs (Chen et al., 2023).
D.8 HUMAN STUDY
We conducted our human study by obeying the local laws and regulations. The demographics of the
human subjects are shown in Table 7.
-----
Table 7: Demographics of the recruited human subjects.
|Sex|Age|Degree|
|---|---|---|
|Male: 63 (63%) Female: 37 (37%)|20-25: 75 (75%) 26-36: 25 (25%)|Bachelor: 62 (62%) Master: 38 (38%)|
|---|---|---|
E DETAILS OF FINE-TUNING
E.1 CONSTRUCTING TRAINING DATA
The training data includes the problems and their corresponding inference steps. Constructing inference steps is similar to the DAG description. For mathematics tasks and logical reasoning tasks,
which utilize T-DAG, we follow the topological order to describe how the intermediate gets its value
from its children. For algorithm tasks, the inference are indeed an algorithm, thus, we formalize each
step in the algorithm into natural language, which finally form the inference steps. Following list
presented a training sample of each task. Note that the inference steps for each task are also highly
flexible and can be augmented with more details, and we believe that the more detailed inference
steps will benefit more.
- Arithmetic:
**Input:**
Here is a description of an arithmetic problem:
The value of aae is 2.
The value of aad is 4.
aaf gets its value by dividing the value of aad by those of aae.
aag gets its value by taking the square root of the value that aaf
has.
Compute the result of aag. If the solution cannot be calculated,
answer ’N/A’. Ensure your result is within a relative precision of
0.0001 (or 0.01%) compared to the ground truth value. Ensure your
final result begins with ’<<<’ and ends with ’>>>’, for example, if
the answer is 1, your final result should be <<<1>>>.
**Inference steps:**
aae is 2.0
aad is 4.0
aaf = aad / aae = 4.0 / 2.0 = 2.0
aag = sqrt aaf = sqrt(2.0) = 1.41421356
Thus, the answer is <<<1.41421356>>>
- Linear Equation:
**Input:**
Given the following linear equation system with two variables:
aag0 x + -6 y = -6
-2 x + 6 y = 0
The calculation of aag0 is defined as:
The value of aab0 is 10.
The value of aaa0 is 6.
aac0 gets its value by adding together the value of aaa0 and aab0.
The value of aad0 is 6.
The value of aae0 is 1.
aaf0 gets its value by multiplying together the value of aad0 and
-----
aae0.
aag0 gets its value by multiplying together the value of aac0 and
aaf0.
Determine the values of x and y. Ensure your results are within
a relative precision of 0.001 (or 0.1%) compared to the ground truth
values. Your response should be formatted as: <<<x’s value y’s
value>>>, e.g., if x=1 and y=2, then it should be <<<1 2>>>
**Inference steps:**
Let’s first solve the coefficients of the linear equation.
aab0 is 10.0
aaa0 is 6.0
aac0 = aaa0 + aab0 = 6.0 + 10.0 = 16.0
aad0 is 6.0
aae0 is 1.0
aaf0 = aad0 * aae0 = 6.0 * 1.0 = 6.0
aag0 = aac0 * aaf0 = 16.0 * 6.0 = 96.0
Thus, aag0 = 96.0
Next, solve the linear equation:
96.0 x + -6.0 y = -6.0
-2.0 x + 6.0 y = 0.0
To eliminate ’y’, multiply the first equation by 6 and the
second equation by -6. This makes the coefficients of ’y’ equal.
Subtracting the second equation from the first then gives: 564.0x =
-36.0.
From the equation 564.0x = -36.0, we can solve for x.
Solving for x, we get x = -0.06382979.
Substituting x = -0.06382979 into the first original equation, we
get: -6.0y = 0.12765957, which gives y = -0.0212766.
Thus, the answer is <<<-0.06382979 -0.0212766>>>
- Boolean Logic:
**Input:**
Here is a description of a boolean logic problem:
aaa is False.
aab is True.
The value of aac equals to (aaa OR aab).
aad is False.
aae is False.
The value of aaf equals to (aad AND aae).
The value of aag equals to (aac AND aaf).
Compute the result of aag. If the solution can not be calculated,
answer ’N/A’. Ensure your final result begins with ’<<<’ and ends
with ’>>>’, for example, if the answer is True, your final result
should be <<<True>>>.
**Inference steps:**
aaa is False.
aab is True.
aac = (aaa OR aab) = (False OR True) = True.
aad is False.
aae is False.
aaf = (aad AND aae) = (False AND False) = False.
-----
aag = (aac AND aaf) = (True AND False) = False.
Thus, the answer is <<<False>>>
- Deductive Logic:
**Input:**
Here is a description of a deductive logic problem:
aaa is False.
aab is False.
(aaa or aab) -> aac.
(NOT aac) -> aad.
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if A is true, then B is true. If A is false, B’s truth
value remains undetermined (N/A). Deduce the result of aad. If the
solution can not be deduced, answer ’N/A’. Ensure your final result
begins with ’<<<’ and ends with ’>>>’, for example, if the answer is
True, your final result should be <<<True>>>.
**Inference steps:**
aaa is False.
aab is False.
(aaa OR aab) = (False OR False) -> aac.
The premise (False OR False) is False, thus, the value of aac cannot
be deduced and is set to N/A.
(NOT aac) = (NOT N/A) -> aad.
The value of aad cannot be deduced, and is set to N/A.
Thus, the answer is <<<N/A>>>
- Abductive Logic:
**Input:**
Here is a description of an abductive logic problem:
(aad or aae) -> aaf.
(aaa or aab) -> aac.
(aac or aaf) -> aag.
Given aag is False, what is the value of aab?
The symbol ’->’ represents a deductive relationship, e.g., A -> B
implies that if B is false, then A is false. If B is true, A’s
truth value remains undetermined (N/A). If the solution can not be
abduced, answer ’N/A’. Ensure your final result begins with ’<<<’
and ends with ’>>>’, for example, if the answer is True, your final
result should be <<<True>>>.
**Inference steps:**
(aac OR aaf) -> aag = False.
Given aag is False, the value of premise (aac OR aaf) is False,
thus, the value of aac is abduced as False.
(aaa OR aab) -> aac = False.
Given aac is False, the value of premise (aaa OR aab) is False,
thus, the value of aab is abduced as False.
Thus, the answer is <<<False>>>
- Reachability:
-----
**Input:**
Given a directed graph:
aag points to: (None).
aaf points to: (None).
aaa points to: (aag, aaf).
aac points to: (aaa).
aab points to: (aaa).
aad points to: (aaa).
aae points to: (aad).
Can aag be reached starting from aaf?
Respond with either ’<<<True>>>’ if reachable, or ’<<<False>>>’
otherwise.
**Inference steps:**
Starting the search process from node aaf with the goal to reach
node aag.
Checking node aaf.
Node aaf has no unvisited children. Moving back.
Exhausted all possible paths without reaching node aag.
Thus, the answer is <<<False>>>
- Max Sum Path:
**Input:**
Given a directed graph with values assigned to each node:
aag points to: (None).
aaf points to: (aag).
aae points to: (aaf, aag).
aab points to: (aaf).
aad points to: (aab, aae).
aac points to: (aag, aad).
aaa points to: (aac, aag, aae).
The value of aaa is 3
The value of aab is 2
The value of aaf is 5
The value of aag is 5
The value of aae is 1
The value of aad is 7
The value of aac is 2
What’s the maximum sum path from aab to aae?
For exmaple, the value of the path A->B->C is obtained by summing
the values of nodes A, B, and C. Please format your response
as <<<Answer>>>. For example, if the answer is 1, it should be
presented as <<<1>>>.
**Inference steps:**
Starting the search for the maximum sum path from node aab to node
aae.
Reaching node aab with current sum of 2.
Now, we explore the children of node aab: aaf.
Reaching node aaf with current sum of 7.
Now, we explore the children of node aaf: aag.
Reaching node aag with current sum of 12.
Node aag has no children. Moving back.
There is no path from node aab to node aae.
Thus, the answer is <<<N/A>>>
-----
E.2 TRAINING DATA AND TESTING DATA
E.2.1 TRAINING DATA
For mathematical tasks and logical reasoning tasks that utilize T-DAGs, we construct four types of
training datasets. Each dataset consists of 500 training samples. All of these types have a depth of
3. The settings are as follows:
1. width=2, add random links=0, embed random descs=0,
2. width=2, add random links=1, embed random descs=0,
3. width=2, add random links=1, embed random descs=1,
4. width=3, add random links=0, embed random descs=0.
For algorithm tasks, two types of training datasets are generated:
1. num nodes=7, max links per node=3,
2. num nodes=10, max links per node=4.
E.2.2 TESTING DATA
We create three types of testing data:
1. In-Distribution (ID) Test Set: The difficulty level matches that of the training set.
- For T-DAGs: depth=4, width=2, with no extra links and random descriptions.
- For G-DAGs: num nodes=15 with max links=6.
2. Out-of-Distribution (OOD) Test Set:
- For T-DAGs: depth=4, width=2, without extra links and random descriptions.
- For G-DAGs: num nodes=15 with max links=6.
3. Out-of-Distribution-Hard (OOD-hard) Test Set:
- For T-DAGs: depth=4, width=2, with one extra link per node and one random description.
- For G-DAGs: num nodes=20 with max links=8.
Note that the definition of OOD in our tasks is mainly on the different complexities of the samples
that may come with more advanced structures or descriptions. For model evaluation, when using the
DYVAL generated testing data, a zero-shot setting was adopted. For existing benchmarks, few-shot
COT examples were provided in the context: 4 examples for GSM8K and SVAMP, 3 for FOLIO
and RACO, and 2 for DP and LCS. The results of evaluation in our tasks are presented in Figure 10.
E.3 RESULTS OF FINE-TUNING
We fine-tuned Llama2-13b-chat with LORA (Hu et al., 2022) for 3 epochs where the rank was 8,
the scaling factor was 16 and the drop out rate was 0.05. We used a 0.0003 learning rate with batch
size 128. Results on existing benchmarks of the fine-tuned model is in Figure 6 of the main paper.
Figure 10 displays the results after fine-tuning on our test datasets as described in Sec.E.2.2. The
performance of Llama2-13B-chat on tasks like boolean logic, deductive logic, and reachability significantly improves after fine-tuning on our dataset. However, noticeable gaps remain, particularly
in areas such as mathematic tasks, abductive logic and the max sum path.
F IMBALANCED GENERATED DATASET
Our algorithm can easily satisfy the balance requirement by meticulously controlling the flexible
dynamic generation process. For example, in reachability task, we can drop the generated evaluation
samples with ’False’ labels until we generate a sample with ’True’ label. We presented the results
of GPT-3.5-Turbo and GPT-4 in balanced datasets in Table 8. The results in balanced datasets are
similar to our initial findings: (1) GPT-3.5-Turbo consistently predicted all questions as “True.”,
resulted in a uniform accuracy rate of 50%. (2) GPT-4 demonstrated excellent performance. It
maintained significantly higher accuracy rates across all complexity levels.
-----
Vanilla-ID Vanilla-OOD Vanilla-OOD hard
Gain FT-ID Gain FT-OOD Gain FT-OOD hard
1.0
0.8
0.6
Accuracy 0.4
0.2
0.0 Arithmetic Linear Eq Bool Logic Deductive LogicAbductive Logic Reachability Max Sum Path
Figure 10: Fine-tuned results on ID and OOD sets of our tasks. For arithmetic, linear equation,
reachability and max sum path tasks, the vanilla accuracy is zero.
Table 8: Results of fine-tuned ChatGPT model on general natural langugage understanding tasks
Model ChatGPT GPT4
Complexity D1 D2 D3 D4 D1 D2 D3 D4
Balanced 50 50 50 50 84.54 79.03 73.5 72.41
Imbalanced 63.87 54.27 53.73 52.33 85.33 83 67.67 74.33
G GENERAL LANGUAGE UNDERSTANDING ABILITY AFTER FINE-TUNING
We fine-tuned GPT-3.5-turbo-0613 using our generated data on abductive logic and reachability
datasets, as GPT-3.5 performs worst on these two datasets. Specifically, we generated 100 samples
across complexity levels D1, D2, and D3 for each task. We compared the performance of original
and fine-tuned models on several benchmark tasks in GLUE dataset. The performance of abductive
logic and reachability task are tested on D4 task (different from fine-tuning dataset). As shown in
Table 9, performance on WNLI and QNLI datasets dropped for the fine-tuned model. However, finetuned model achieves better results on CoLA, QQP, and MRPC datasets. Despite the mixed results,
the overall improvement in several datasets suggests that fine-tuning on our generated datasets does
not necessarily hurt the general language understanding ability.
Table 9: Results of fine-tuned ChatGPT model on general natural langugage understanding tasks
Abductive Logic Reachability SST-2 CoLA WNLI QNLI QQP MRPC
GPT3.5 55.27 50.00 93.29 77.00 59.15 80.00 76.50 73.0
GPT3.5-FT 85.10 96.53 93.23 78.00 45.07 72.50 78.00 77.5
H FLEXIBILITY TO NATURAL LANGUAGE TASKS
Finally, we discuss the flexibility of DYVAL while the main focus of this paper is on reasoning tasks.
We show that DYVAL can be easily extended to natural language processing tasks using an initial
experiment on sentiment analysis.
Generally speaking, a natural language sentence can be expressed as a syntax tree, similar to DAGs.
However, generating sentences through direct syntax tree construction (which is similar to the construction of arithmetic task) presents notable challenges, primarily due to the need for grammatical
correctness and the inherent naturalness of these sentences. Nevertheless, DYVAL can still be applied to generate tasks in natural language by utilizing syntax tree templates extracted by existing
sentences. For each sentence in the SST-2 dataset, we initially employ GPT-3.5-Turbo to extract
its syntactic structure. Within each syntax tree (i.e., DAGs), we identify the elements that can be
modified: namely, nouns (such as names and places) and adjectives. GPT-3.5-Turbo is then used to
create five alternative candidates for each of these modifiable components, which are subsequently
replaced in an iterative fashion. Throughout this process, we continuously assess whether these
replacements alter the original semantic meaning of the sentence. Any changes that result in a semantic shift are discarded. Note that the graph cannot be randomly generated as the reasoning tasks
-----
since we need to constrain the naturalness and grammar correctness of the generated sentences. As
a remedy, the structure of the graph can be abstracted using the template sentences generated by
GPT-3.5-Turbo.
We generate three alternative versions for each sentence in the above process, forming our newly
generated dataset. We then evaluate the performance of both Flan-T5-large and Llama2-7b models,
using the original SST-2 dataset as well as our generated dataset for comparison. The results of these
evaluations are detailed in Table 10. It shows that using our generated samples, the performance
drops, indicating that we are creating challenging test sets. Note that this is an initial study and
extending DYVAL to NLP tasks is nontrivial that cannot be covered in this paper, but should be left
for future work.
Table 10: Results for Logical Reasoning Tasks
Flan-T5-large Llama2-7b
Origin **93.12** **90.37**
DyVal 86.46 72.03
-----
| [
"Jindong, Wang",
"Jiaao, Chen",
"Kaijie, Zhu",
"Neil Zhenqiang, Gong",
"Xing, Xie",
"Diyi, Yang"
] | 2023-10-13T00:00:00 | ICLR 2024 Spotlight | true | 0 | 0 | null | https://openreview.net/forum?id=gjfOL9z5Xr | https://arxiv.org/abs/2309.17167 | https://www.semanticscholar.org/paper/1d7f414983eb847c4618489baa44e99b01162f98 |
Efficient Model-agnostic Alignment via Bayesian Persuasion | With recent advancements in large language models (LLMs), alignment has emerged as an effective technique for keeping LLMs consensus with human intent. Current methods primarily involve direct training through Supervised Fine-tuning (SFT) or Reinforcement Learning from Human Feedback (RLHF), both of which require substantial computational resources and extensive ground truth data. This paper explores an efficient method for aligning black-box large models using smaller models, introducing a model-agnostic and lightweight Bayesian Persuasion Alignment framework. We formalize this problem as an optimization of the signaling strategy from the small model's perspective. In the persuasion process, the small model (Advisor) observes the information item (i.e., state) and persuades large models (Receiver) to elicit improved responses. The Receiver then generates a response based on the input, the signal from the Advisor, and its updated belief about the information item. Through training using our framework, we demonstrate that the Advisor can significantly enhance the performance of various Receivers across a range of tasks. We theoretically analyze our persuasion framework and provide an upper bound on the Advisor's regret, confirming its effectiveness in learning the optimal signaling strategy. Our Empirical results demonstrates that GPT-2 can significantly improve the performance of various models, achieving an average enhancement of 16.1% in mathematical reasoning ability and 13.7% in code generation. We hope our work can provide an initial step toward rethinking the alignment framework from the Bayesian Persuasion perspective. | An efficient method for aligning black-box large models using smaller models using smaller models is explored, introducing a model-agnostic and lightweight Bayesian Persuasion Alignment framework and an upper bound on the Advisor's regret is provided, confirming its effectiveness in learning the optimal signaling strategy. | ## Efficient Model-agnostic Alignment via Bayesian Persuasion
**Fengshuo Bai[1][,][2][,][3]** **Mingzhi Wang[2][,][†∗]** **Zhaowei Zhang[2][,][3][,][†]** **Boyuan Chen[2][,][†]** **Yinda Xu[1]**
**Ying Wen[1][,][‡]** **Yaodong Yang[2][,][‡]**
1Shanghai Jiao Tong University
2Institute for Artificial Intelligence, Peking University
3National Key Laboratory of General Artificial Intelligence, BIGAI
**Abstract**
With recent advancements in large language models (LLMs), alignment has
emerged as an effective technique for keeping LLMs consensus with human intent.
Current methods primarily involve direct training through Supervised Fine-tuning
(SFT) or Reinforcement Learning from Human Feedback (RLHF), both of which
require substantial computational resources and extensive ground truth data. This
paper explores an efficient method for aligning black-box large models using
smaller models, introducing a model-agnostic and lightweight Bayesian Persuasion Alignment framework. We formalize this problem as an optimization of the
signaling strategy from the small model’s perspective. In the persuasion process,
the small model (Advisor) observes the information item (i.e., state) and persuades
large models (Receiver) to elicit improved responses. The Receiver then generates
a response based on the input, the signal from the Advisor, and its updated belief
about the information item. Through training using our framework, we demonstrate
that the Advisor can significantly enhance the performance of various Receivers
across a range of tasks. We theoretically analyze our persuasion framework and
provide an upper bound on the Advisor’s regret, confirming its effectiveness in
learning the optimal signaling strategy. Our Empirical results demonstrates that
GPT-2 can significantly improve the performance of various models, achieving an
average enhancement of 16.1% in mathematical reasoning ability and 13.7% in
code generation. We hope our work can provide an initial step toward rethinking
the alignment framework from the Bayesian Persuasion perspective.
**1** **Introduction**
Recent years have witnessed increased attention and effort in aligning large language models (LLMs)
with human intentions and values [35, 26]. This alignment is facilitated by providing reliable
supervision through demonstrations [8, 45], reward signals [37], preferences [16, 40] or critiques
[42, 5], and by employing methods such as supervised learning (e.g., Supervised Fine-tuning, SFT)
or reinforcement learning (e.g., Reinforcement Learning from Human Feedback, RLHF) [37].
However, these methods, including RLHF, require multiple models and direct training of large
models, which significantly increases computational demands [40]. Moreover, Fine-tuning cannot
be applied to closed-source models, complicating output control for alignment with human intents.
Additionally, current alignment methods, like SFT and RLHF, face limitations when human evaluators
lack expertise in complex tasks [7, 43]. These challenges highlight the need for efficient, scalable
alignment strategies for both open-source and closed-source models. Therefore, our work aims to
address the above challenges by answering the following question:
_∗† Equal Contribution. ‡Correspondence to Yaodong Yang<[email protected]> and Ying Wen_
_<[email protected]>._
-----
"𝑣
𝑉(𝜇) Performance measured by the Advisor’s utility
𝑣(𝜇)'
co('𝑣)
𝒈: clarify the Advisor’s expected utility
problem objective. 𝜇[!] Distribution of 𝑐
Receiver’s belief is influenced by a signal 𝒈
𝜇["](𝑐) 𝜇 [!](𝑐)
…
information item 𝑐
Advisor (small model) Receiver (large model) Stronger Receiver
Figure 1: An illustration of our persuasion framework. The Receiver observes a signal g sent by the
Advisor and updates its belief from the prior distribution µ[0] to a posterior distribution µ[g]. The axes
depict the Advisor’s expected utility ˆv(µ) across various information distributions µ. In this context,
_co(ˆv) denotes the convex hull of ˆv, while V represents the concave closure of ˆv. Here, V (·) is the_
largest expected utility Advisor can achieve with any signal. From the Advisor’s perspective, the
Receiver’s performance is enhanced following persuasion.
**_Can we use a smaller model to influence the behaviors of larger models, thereby enabling_**
**_alignment and enhancing performance with little human supervision or feedback?_**
Inspired by Bayesian persuasion [30], we model the alignment problem as an information design
process, involving a protocol between a small model and a large model. Instead of training multiple
models as in techniques like RLHF, we only employ a small model with minimal supervision to learn
a signaling strategy that influences the behaviors of fixed large models. In this setup, large models are
treated as a black box. This modeling approach significantly reduces the demand for computational
resources by delegating alignment tasks to smaller models, leading to a more lightweight and
efficient framework. This makes it inherently suitable for a broader range of alignment scenarios,
accommodating tasks of varying difficulty and larger models.
In this work, we introduce a novel framework termed Bayesian Persuasion Alignment, as illustrated
in Figure 1. We frame alignment as a Bayesian Persuasion (BP) problem, wherein a smaller model
serves as the Advisor and a larger model acts as the Receiver. In this setup, the Advisor generates a
signal that is sent to the Receiver. Upon receiving this signal, the Receiver updates its beliefs and
produces a response. Our core insight is that a smaller model trained on supervision for optimal
signaling strategy can effectively persuade larger models, thereby improving the quality of their
responses, i.e., their outputs. This mechanism provides several advantages: (1) From the perspective
of information design, the well-defined signaling strategy of the Advisor ensures an increase in the
Advisor’s utility without decreasing the Receiver’s utility [30]. (2) The Advisor manipulates the
Receiver’s belief to enhance performance, significantly reducing the need for training resources while
ensuring alignment performance, making it an effective and parameter-efficient alignment strategy.
(3) Moreover, the learned signaling strategy can be applied to different Receivers, guaranteeing a
model-agnostic nature and making it easier to generalize to harder tasks.
To the best of our knowledge, BP Alignment is the first integration of Bayesian persuasion with the
alignment framework. Our main contributions can be summarized in three folds: First, we introduce
a parameter-efficient and model-agnostic alignment framework that trains a smaller model to enhance
the performance of various larger models. Second, we demonstrate that our persuasion framework
significantly improves the performance of various large models on mathematical problem-solving
and code-generation tasks. Specifically, the Advisor (Phi-2) enables significant enhancements, with
an average improvement of 22.5% on GSM8K [17], 39.0% on MATH [23], and a 24.7% increase on
-----
HumanEval [12]. Lastly, we theoretically analyze our framework and provide an upper bound on the
Advisor’s regret, indicating its effectiveness in learning the optimal signaling strategy.
**2** **Related Work**
In this section, we will review existing works on Scalable Oversight, AI Persuasion, and Eliciting
Latent Knowledge (ELK).
**Scalable Oversight. As models begin to achieve broadly human-level performance and take on more**
complex tasks that are difficult for humans to understand, providing continual, reliable feedback and
ensuring that the models’ behaviors align with human intents becomes challenging. This naturally
raises the crucial issue of scalable oversight: how can we provide supervisory signals to more
_powerful AI systems and ensure they are aligned with human intents? [36, 2, 26] Unlike current_
methods that focus on enhancing the capabilities of weak supervisors [14, 7, 31], our framework
addresses this challenge by transforming weak supervisors into persuaders and identifying the optimal
signal-sending strategy to effectively influence the behaviors of stronger models.
Our work also differs from weak-to-strong generalization [10] and similar alignment methods [27, 32].
These methods face a trade-off: the strong model may either mimic the weak model, reducing
performance or use its reasoning abilities to improve [10]. Additionally, they often rely heavily on
ground truth labels. In contrast, our approach uses small models as Advisors to elicit the capabilities
of stronger models without adding noisy labels. Guided by the information design principle, our
method scales to various stronger models and challenging tasks, minimizing the need for ground
truth. We evaluate our method on various mathematical problem-solving and code-generation tasks
using only GPT-2 as the Advisor. It achieves significant advancements, with an average improvement
of 9.5% on GSM8K, 22.6% on MATH, and 13.7% on HumanEval across a range of strong models.
**AI Persuasion. Persuasion is a dynamic game process in which one player (sender) influences the**
beliefs or actions of another player (receiver) by providing informative signals, thereby affecting the
outcomes for both players. AI persuasion can be categorized into two types based on the target: (1)
AI persuading humans to change their original viewpoints (i.e., Captology) [21, 47, 33, 19], and (2)
employing persuasive signals to change the behavior of AI systems. While most existing research has
concentrated on the former, studies on the latter remain relatively nascent. In this paper, we primarily
explore the latter category.
Zeng et al. [48] conducted a comprehensive review of decades of social science research and proposed
a taxonomy to automatically generate persuasive adversarial prompts that induce unsafe behaviors in
LLMs. However, their method lacks a formal definition or analysis of persuasive behavior and its
impact. Bayesian persuasion [30, 29] is a symmetric information framework that utilizes to influence
beliefs by strategically sharing information aligned with motivations, aiding decision-making tasks
[22]. While its application to language models remains unexplored, [49] suggested it could align
AI during deployment by tailoring information based on different scenarios. To the best of our
knowledge, we are the first to use dialogue to apply Bayesian persuasion to LLMs and enhance their
capabilities.
**Eliciting Latent Knowledge. Christiano et al. [15], Hobbhahn [24] introduced a theoretical frame-**
work termed Eliciting Latent Knowledge (ELK), designed to extract latent knowledge from models
to assess whether AI systems align with human intents. This framework includes aspects such as
honesty analysis [20], knowledge elicitation [9, 38], and general task knowledge acquisition [10].
Our framework utilizes advisor persuasion to elicit strong models’ latent knowledge for solving
difficult tasks and has the potential for honesty analysis.
Typical ELK methods struggle to train models to report true beliefs rather than just aligning with
human preferences. This issue arises because both strategies yield identical training losses, as they
produce the same answers to training inputs [24]. As a result, models tend to align with human
expectations instead of reporting their own beliefs. Our method addresses this by decoupling the
training objective into a reporting objective and a persuasion objective, focusing on optimal signaling
rather than human-evaluated ground truth.
-----
**3** **Bayesian Persuasion Alignment**
In this section, we formally introduce the proposed persuasion framework. We begin by establishing
the notations and outlining the persuasion protocol, followed by defining the overall objective and
conducting a theoretical analysis of regret.
**3.1** **Protocol and Notations**
We introduce the persuasion protocol wherein an Advisor (small model) persuades a Receiver (large
model) to improve its response to a given input. For each input x, a finite set Cx contains all associated
information items (i.e., state) with input x. The Receiver’s utility function u(x, c, y) is continuous
and dependent on its response y ∈Y to the input x ∈X and the associated information item
_c_ _x. Similarly, the Advisor has a continuous utility function v(x, c, y), which is contingent on the_
_∈C_
Receiver’s response, input, and associated information item. Importantly, in our settings, the Advisor
lacks knowledge of the Receiver’s utility function. For each input x, the Advisor and Receiver start
with a shared prior µ[0]x
_G and a family of distributions[∈]_ [int][(∆(] π[C]x[x]([))]·|c[2])[. The signaling strategy], c ∈Cx over G. This strategy is implemented through a neural[ π][ is defined by a finite realization space]
network with parameters θ. The Advisor sends a signal, and the Receiver observes the chosen signal
realization g ∈G (with |G| < ∞).
The game timing in persuasion is as follows: The Advisor commits to a signaling strategy πθ and
announces it to the Receiver. For a given input x, the Advisor observes an information item sampled
from µ[0]x [and sends a signal][ g][ to the Receiver. Upon receiving this signal, the Receiver updates its]
belief about information item in Cx, forming posterior distribution µ[g]x [via Bayes’s rule. The Receiver]
then chooses a response from the response set, which is defined by
_y[∗](µ[g]x[) = arg max]_ E
_y∈Y_ _c∼µ[g]x_
_u(x, c, y)_ _._ (1)
i
The solution of the game is the problem of optimal signaling strategy design from the Advisor’s point
of view. Taking the Receiver’s response as given, the Advisor selects a signaling strategy πθ that
maximizes its expected utility. Since the responses are generated only by the Receiver, the Advisor
cannot directly influence the information set. Instead, the Advisor can leverage its informational
advantage concerning the information item to influence the Receiver indirectly by way of signaling,
thereby persuading the Receiver to generate improved responses.
**3.2** **Signaling Strategy and Belief Update**
A signaling strategy of the Advisor generates a distribution over G, which is the signal realization
space. Formally, the signaling strategy πθ comprises a function πx : C → ∆(G) for each input x ∈X .
Upon observing an information item c, the Advisor sends a signal sampled from πx(c), where the
input is x, and πx(g, c) denotes the probability of g within this distribution.
_∈G_
The signal space G is broadly construed, including some uninformative signaling strategies. For
instance, the Advisor may send the same signal regardless of the information item c, such that
_πx(c) = πx(c[′]) for all c, c[′]_ _x. Without loss of generality, we assume that signals in_ are
_∈C_ _G_
perceived as distinct by the Receiver.
Upon receiving a signal g, the Receiver updates its posterior belief regarding the information items.
The conditional probability of the information item being c is defined as:
_µx(c)πx(g_ _c)_
Pr(c _g, πx) =_ _|_ (2)
_|_ _c[′]_ _x_ _[µ][x][(][c][′][)][π][x][(][g][|][c][′][)]_ _[.]_
_∈C_
P
The derivation of the Receiver’s posterior belief also depends on its knowledge of the signaling
strategy πθ. In accordance with the Bayesian persuasion framework, the Advisor commits to a
signaling strategy πθ at the beginning of the process and announces it to the Receiver.
**3.3** **Signaling Strategy Optimization**
From the Advisor’s perspective, the objective is to identify a signaling strategy πθ that maximizes the
Advisor’s expected utility, thereby inducing superior responses from the Receiver. Accordingly, the
2int(X) denotes the interior of the set X and ∆(X) represents the set of all probability distributions on X
-----
signaling strategy is optimized by minimizing the following loss function:
Pr(c _g, πx) v(x, c, y)_ _,_ (3)
_|_
_c_ _x_
h X∼C i
(θ) = E
_L_ _−_ _x_
_∈X_
where y is the Receiver’s response to input x as determined by equation (1).
**3.4** **Regret Analysis**
Although the Bayesian Persuasion Alignment framework we propose is practically appealing, a
pivotal theoretical question arises: How can we ensure that this framework robustly learns the
_optimal signaling strategy over time? Specifically, it is crucial to demonstrate that the Advisor_
can gradually find the most persuasive signaling strategy through its interactions with the Receiver.
This convergence guarantee is essential for our framework. To address this question, we draw
inspiration from the online Bayesian persuasion setting [11, 6] and analyze the performance of our
algorithm from an online learning perspective. We introduce the concept of regret, which quantifies
the utility difference between the algorithm’s performance and the optimal strategy over a certain
period. Demonstrating that the algorithm’s regret grows sublinearly with time would imply that it can
progressively converge to the optimal strategy.
Without loss of generality, we focus on signaling schemes that are both direct and persuasive [3],
according to the revelation principle. In a direct signaling scheme, the signals directly correspond to
response recommendations for the Receiver. Moreover, a signaling scheme is considered persuasive
if it incentives the Receiver to follow the response recommendations provided by the Advisor. Let P
denote the set of direct and persuasive signaling schemes, where each element ϕ ∈P is a mapping
_ϕ : C →_ ∆(Y). To simplify the notation, we omit x in subsequent analysis. With the definition of
the set of persuasive signaling schemes P, the Advisor’s expected utility under a signaling scheme
_ϕ ∈P can be expressed as follows:_
_µcϕc(y)v(c, y),_ (4)
_yX∈Y_
_v(ϕ) :=_
_c∈C_
where µc represents the prior probability of information item c, ϕc(y) denotes the probability that the
signaling scheme ϕ recommends response y under information item c, and v(c, y) is the Advisor’s
utility when the Receiver takes response y under information item c.
Next, we introduce a linear mapping f that maps each signaling scheme ϕ ∈P to a point in the R[|Y|]
space. Specifically, for each ϕ ∈P, we define
_f_ (ϕ) := [−v(ϕ, y)]y∈Y, (5)
where v(ϕ, y) = _c_ _[µ][c][ϕ][c][(][y][)][v][(][c, y][)][ represents the Advisor’s expected utility when the Receiver]_
_∈C_
takes response y under signaling scheme ϕ.
[P]
Intuitively, the mapping f represents each signaling scheme as a |Y|-dimensional vector, where each
component −v(ϕ, y) represents the negative of the Advisor’s expected utility for response y. This
representation embeds the signaling schemes into a Euclidean space that directly corresponds to the
Receiver’s response space. Furthermore, we examine the convex hull of the graph of f, denoted as
co f (P). Each point within co f (P) corresponds to a convex combination of signaling schemes.
Formally, the Advisor’s regret at round T is defined as:
_v(ϕt)._ (6)
_t=1_
X
_RT := max_
_ϕ∈P_
_v(ϕ) −_
_t=1_
X
To analyze the regret bound of our persuasion framework, we present the theoretical version of our
algorithm in Algorithm 1.
-----
**Algorithm 1 Theoretical Persuasion Algorithm**
**Require: Set of information items C, set of responses Y, prior distribution µ[0], Advisor’s utility**
function v, regret minimizer R for the set co f (P)
1: for t = 1, . . ., T do
2: co f ( ) _zt_ _.RECOMMEND()_
_P_ _∋_ _←R_
3: (ϕt[(][i][)][, λ]t[(][i][)][)][}]i [m+1] _▷_ Caratheodory’s Theorem
_{_ _∈_ _[←]_ [DECOMPOSE][(][z][t][, f] [(][P][))]
4: Sample it ∈ [m + 1] with probabilities λ[(1)]t _[, . . ., λ]t[(][m][+1)]_
5: Let ϕt _ϕt[(][i][t][)]_
6: Observe information item ← _ct_ _µ[0]_
7: Select and play action yt _ϕ ∼t(_ _ct)_
8: _.OBSERVE(v(ct, yt)) ∼_ _·|_
_R_
9: end for
The key idea behind this algorithm is to maintain a regret minimizer R over the possible signaling
strategies, represented by the convex hull of the set of posterior distributions co f (P). At each round t,
the algorithm obtains a recommended strategy zt from R and decomposes it into a convex combination
of extreme points (ϕt[(][i][)][, λ]t[(][i][)][)][}]i [m+1] [using Caratheodory’s Theorem [][18][]. The algorithm then]
_{_ _∈_
samples an index it according to the weights λ[(]t[i][)] and plays the corresponding signaling strategy
_ϕt = ϕ[(]t[i][t][)]. Upon observing the realized information item ct and the Advisor’s utility v(ct, yt) for_
the chosen reponse yt, the algorithm updates the regret minimizer with this feedback.
Under this theoretical algorithm, we can derive the following regret bound.
**Theorem 1. Algorithm 1 guarantees regret RT = O(m[3][/][2][√]T log T** ), where m = |Y| is the number
_of receiver’s reponses._
The regret bound presented in Theorem 1 demonstrates that our algorithm achieves sublinear regret
over the time horizon T, with a dependence on the size of Receiver’s response space m. The output
space of LLMs is theoretically infinite, as they can generate text of arbitrary length. However, each
response’s length is practically limited. Additionally, responses with same semantics are considered
equivalent given a specific input. Therefore, the Receiver’s response space can be regarded as finite,
aligning well with the assumptions in our theoretical analysis. Although there are differences between
the theoretical algorithm and its practical implementation, the core principle of learning the optimal
signaling strategy through interaction remains consistent. This consistency provides a theoretical
guarantee for the algorithm’s performance and demonstrates its effectiveness in learning the optimal
signaling strategy over time.
**4** **Experiments**
In this section, we evaluate the effectiveness of our persuasion framework on mathematical problems
and code generation. Our evaluation aims to address the following key questions:
**(1) Can our framework enhance the Receiver’s performance in various tasks? (Section 4.2.1)**
**(2) Can our framework find a non-trivial signaling strategy? (Section 4.2.2)**
**(3) How about the efficiency of the proposed framework? (Section 4.2.4)**
Furthermore, we investigate the generalization of the signaling strategy across different Receivers
(Section 4.2.1), across varying difficulties (Section 4.2.3), and for various tasks (Appendix B.1).
Details on experiments are provided in Appendix A.
**4.1** **Settings**
**Implementation Details. We train the Advisor for math-solving tasks using the training datasets**
from GSM8K and MATH, and for the code generation task using the training dataset from MBPP. We
construct an information set for each input using Llama3-8B-Instruct [3] for all datasets. Specifically,
each input includes seven information items, each emphasizing different key aspects essential for
problem-solving. Further details on information set construction are provided in Appendix A.1. In
3https://github.com/meta-llama/llama3
-----
Table 1: Performance of various Receivers under persuasion. We report the accuracy on GSM8K
and MATH, and the pass@1 score on HumanEval across four information structures. "Posterior
Information" refers to sampling the information item from the posterior distribution, influenced by
the Advisor. The Advisor for math tasks differs from that for code generation tasks. Arrows indicate
performance improvements relative to the prior distribution.
No All Prior **Posterior Information**
Task Receiver
Information Information Information
Advisor (GPT-2) Advisor (Phi-2)
Phi-2 56.0 41.0 56.8 59.1 62.1
Mistral-7B 34.3 48.0 45.7 50.4 53.8
Llama2-7B 15.1 36.6 27.2 34.5 45.6
Llama2-7B-Chat 21.8 31.8 37.3 40.0 50.0
Llama2-13B 25.2 38.9 36.2 38.9 45.9
Llama2-13B-Chat 33.9 37.3 36.1 37.9 39.2
Llama3-8B 47.6 54.0 53.7 56.0 62.6
Llama3-8B-Instruct 73.5 72.2 72.3 74.5 75.4
Vicuna-7B 14.9 19.9 29.9 35.1 43.2
Vicuna-13B 23.0 24.8 35.0 43.9 49.2
Vicuna-33B 43.2 44.1 47.8 53.1 58.5
Average (accuracy) 35.3 40.8 43.5 47.6 (9.5% ↑) 53.2 (22.5% ↑)
Phi-2 10.1 11.6 11.5 13.9 15.4
Mistral-7B 6.4 10.3 7.9 9.3 10.8
Llama2-7B 4.1 9.5 6.3 8.6 10.3
Llama2-7B-Chat 4.6 7.8 6.0 8.0 10.4
Llama2-13B 4.5 9.7 7.7 9.6 11.4
Llama2-13B-Chat 5.2 9.8 7.3 9.2 10.5
Llama3-8B 11.0 16.1 12.8 15.9 16.0
Llama3-8B-Instruct 18.1 18.6 18.1 18.9 19.7
Vicuna-7B 3.8 10.1 6.7 8.8 10.5
Vicuna-13B 3.8 11.1 6.7 9.5 11.0
Vicuna-33B 6.8 13.1 9.3 11.2 13.4
Average (accuracy) 7.1 11.6 9.1 11.2 (22.6% ↑) 12.7 (39.0% ↑)
Phi-2 45.7 35.1 39.6 45.7 49.4
Mistral-7B 28.3 32.4 31.2 33.2 35.4
Llama2-7B 12.2 16.2 14.4 16.2 18.3
Llama2-7B-Chat 13.4 16.3 12.6 15.6 22.6
Llama2-13B 17.1 17.7 16.5 19.5 21.3
Llama2-13B-Chat 19.5 17.2 16.1 17.4 22.0
Llama3-8B 27.4 23.8 24.6 31.7 37.2
Llama3-8B-Instruct 56.7 48.1 53.2 56.3 59.5
CodeLlama-7B 31.1 30.5 28.9 31.2 32.9
CodeLlama-13B 36.5 39.0 35.4 40.8 40.2
CodeLlama-34B 48.7 45.1 41.8 49.7 53.2
Average (pass@1) 30.6 29.2 28.6 32.5 (13.7% ↑) 35.6 (24.7% ↑)
GSM8K
(8-shot)
MATH
(4-shot)
HumanEval
(0-shot)
practical implementation, the Advisor’s utility function is defined as the logarithm of the probability of
generating the correct answer, while the Receiver’s utility function is u(x, c, y) = P (y|x, c), naturally
aligning the model’s inherent mechanism with the Receiver’s behavior. A detailed description and
intuition for the setup of the utility function are provided in Appendix A.2.
**Datasets. For a comprehensive evaluation of the ability, we select two kinds of tasks: math problems**
and code generation.
- GSM8K [17] is high-quality linguistically diverse grade school math word problems created by
human problem writers, which contains 7.5k training problems and 1k test problems.
- MATH [23] is a dataset of challenging competition mathematics problems, which is segmented
into 7.5k training problems and 5k testing problems.
- HumanEval [12] is a code evaluation benchmark consisting of 164 original programming questions,
assessing language comprehension, algorithms, and basic mathematics, with some questions
equivalent to simple software interview questions.
- MBPP [4] consists of approximately 1k crowdsourced Python programming problems, covering
basic programming knowledge, standard library functionalities, etc. This dataset is only used for
the training of the Advisor models.
-----
**Advisor and Receiver. In our persuasion framework, we employ two models: an Advisor (small**
model) and a Receiver (large model). For the Advisor, we select two well-known open-source small
models: GPT-2 [39] (124M) and Phi-2 [25] (2.7B). To broadly investigate the generalization of the
proposed method across various models, we consider several large models as Receivers: Phi-2 [25]
(2.7B), Llama-2 [46] (7B, 13B), Llama-3 (8B), CodeLlama [41] (7B, 13B, 34B), Vicuna [13] (7B,
13B, 33B), and Mistral [28] (7B).
**Evaluation Metrics. For the math problems, we determine accuracy by extracting the last number**
from the generated responses and comparing it directly to the ground truth. For the code generation
tasks, our evaluation focuses on assessing the functional correctness of LLM-synthesized code. We
use the unbiased version of the pass@1 [12] setting for both tasks, namely only generating one result
per round. In practice, we use the tool chain-of-thought-hub[4] and DeepSeek-Coder[5] to perform the
evaluation process for math and code generation tasks, respectively.
**Evaluation Settings. For any given input, there is a corresponding set of information item, each item**
of which is related to the input. In our experiments, we examine four information structures. Given
the specific input, the Receiver may observe: (1) No Information items, (2) All Information items, (3)
an item sampled from the Prior Information distribution, or (4) an item sampled from the Posterior
_Information distribution. Naturally, the variation in information structure has an impact on the quality_
of the Receiver’s response.
**4.2** **Results**
We evaluate the Advisor with various Receiver models to investigate the effectiveness of its signaling
strategy on math problem-solving and code generation tasks. Table 1 shows the Advisor improves
the performance of various models through persuasion instead of training. Additional experiments
demonstrate that our persuasion framework can identify a non-trivial signaling strategy, which
exhibits superior performance in terms of efficiency and generalization.
**4.2.1** **Performance on Persuasion**
To investigate the effectiveness of our persuasion framework, we conduct an experimental evaluation
of the Receiver’s behavior under prior information distribution and posterior information distribution.
Table 1 illustrates that Advisor can significantly improve different Receiver’s performance across
a variety of tasks. Comparing the Receiver’s performance without additional information to that
with prior information, we find that additional information enhances the Receiver’s performance.
From the perspective of persuasion, prior and posterior distributions share the same information set.
Instead of training, the Advisor (GPT-2) can significantly enhance the performance of various models,
achieving an average improvement of 9.5% on GSM8K, 22.6% on MATH, and 13.7% on HumanEval.
When considering the increment in the model parameters for the Advisor, a larger one (Phi-2) enables
significant enhancements, with an average improvement of 22.5% on GSM8K, 39.0% on MATH,
and a 24.7% increase on HumanEval. One important observation can be noticed: a good signaling
strategy by the Advisor can effectively persuade different Receivers.
**4.2.2** **Impact on Information Structure**
In our experiment, we also analyze the impact of different information structures on the Receiver.
In the persuasion process, the receiver combines information items from the information set with
input to generate a response. From the perspective of prompt engineering, we evaluate the quality of
responses when the receiver either disregards information items or considers all information items,
to demonstrate the effect of information selection. For ‘No Information’, it serves as a baseline,
equivalent to standard performance testing for LLMs. As shown in Table 1, when the Receiver
can access all information items, its performance improves. However, it is noteworthy that for
some models, using all information items results in minimal gains or even a decline in performance
compared to not using the information. It can be explained that providing too much information
disperses the model’s attention and risks exceeding the model’s maximum window length.
**4.2.3** **Easy-to-Hard Generalization**
In the extended evaluation, we investigate the generalizability of the Advisor. The results presented
in Table 1 demonstrate that the Advisor’s signaling strategy is effective across various Receiver,
4https://github.com/FranxYao/chain-of-thought-hub/tree/main
5https://github.com/deepseek-ai/DeepSeek-Coder
-----
Vicuna-7B Vicuna-33B
No Information All Information Prior Information Advisor (GPT-2) Advisor (Phi-2)
Llama-3-8B
Llama-3-8B-Instruct
Llama-2-7B-Chat
Mistral-7B
Llama-2-7B
Phi-2
Llama-2-13B-Chat
Vicuna-13B
Llama-2-13B
Vicuna-7B Vicuna-33B
Llama-3-8B
Llama-3-8B-Instruct
Llama-2-7B-Chat
Mistral-7B
Llama-2-7B
Phi-2
Llama-2-13B-Chat
Vicuna-13B
Llama-2-13B
Vicuna-7B Vicuna-33B
(a) MATH Level 1-3 (easy)
(b) MATH Level 4-5 (hard)
Figure 2: Performance of Receiver on easy and hard problems. The Advisor (GPT-2 and Phi-2) is
trained on easy problems of training set (MATH level 1-3), and we observe that the capability of the
Receiver greatly improved on both easy and hard tasks with the persuasion signal of the Advisor. In
both subfigures, our method outperforms scenarios where no information or only prior information is
given, and it even surpasses scenarios where all information is provided for most Receiver models.
confirming its broad applicability. Following Sun et al. [44], we evaluate our framework’s Easy-to_Hard Generalization, which is defined as the ability to address hard tasks by training on simpler ones._
We train our Advisor on easy problems (levels 1-3) from the MATH training dataset and assess their
effectiveness in persuading various models on both easy (levels 1-3) and complex problems (levels
4-5) of the MATH test dataset. As shown in Figure 2, we observe that advisors not only enhance the
performance of various receiver models on easy problems but also improve their performance on
hard problems, which are only trained exclusively with supervision on easy problems.
**4.2.4** **Efficiency on Persuasion**
500
400
55
50
The efficiency of our framework lies in two aspects. On one hand, a well-trained Advisor
can persuade various models to elicit better responses. On the other hand, during the inference
stage, our method achieves enhanced Receiver’s
performance with fewer input tokens. To better understand the relationship between performance improvement and prompt length, we design the Average Relative Performance Improvement (ARPI) metric to measure the improvement in performance relative to the increase in
prompt length.
**Average Relative Performance Improvement**
**(ARPI). To compare the performance of a spe-**
cific information structure with ‘No Information’
across several receivers, let R(A) represent the
performance of structure A, and let L denote the
length of the input prompt tokens. We define
ARPI(A|B) as follows:
ARPI(A _B) = [1]_
_|_ _N_
300
200
45
40
100
35
|Col1|Col2|ARPI|0.|35|
|---|---|---|---|---|
||0.|30 0.|29||
||||||
||||||
|0.|02||||
ARPI
0.35
0.30
0.29
0.02
All Prior GPT-2 Phi-2
Figure 3: Average Relative Accuracy Improvement
on GSM8K. We compare two posterior structure
from Advisor with ‘All Information’ and ‘Prior Information’. The left y-axis represents the increase
in prompt token length relative to the ‘No Information’, while the right y-axis displays the average
accuracy across various models on GSM8K.
_N_
_R(A)_ _R(B)_
ARPI(A _B) = [1]_ _−_ (7)
_|_ _N_ _L(A)_ _L(B)_ _[.]_
_i=1_ _−_
X
ARPI(A|B) presents the relative performance difference of structure A compared with structure
_B. Figure 3 shows that when the Receiver uses all available information to generate responses, it_
improves performance relative to using no information, but this results in a 26% increase in the length
_i=1_
-----
of input tokens, thereby increasing the computational cost of inference. In contrast, utilizing our
persuasion framework, Phi-2 increases the input token length by only 6.9% while achieving a 22.5%
performance improvement, leading to a higher efficiency ratio.
**5** **Conclusion**
In this work, we introduce Bayesian Persuasion Alignment, a novel framework that integrates
the concept of Bayesian persuasion with AI alignment. By formulating alignment as a Bayesian
Persuasion problem, we employ a smaller model as an Advisor to generate signals that persuade
a larger model, the Receiver, to enhance its performance. Our experimental results demonstrate
significant improvements in the performance of various large models on mathematical problemsolving tasks and code generation tasks. The theoretical analysis provides an upper bound on the
Advisor’s regret, highlighting the efficacy of our method in learning the optimal signaling strategy.
We hope our approach will inspire future research in integrating information design with alignment,
contributing to the development of more efficient and effective AI systems.
**Limitations. The effectiveness of persuasion depends on the signaling strategy and is also influenced**
by the inherent capabilities of the Receiver. If the model itself lacks the ability to complete a certain
task, our method may not be effective, which limits the applicability of our framework.
**References**
[1] Jacob D Abernethy, Elad Hazan, and Alexander Rakhlin. Competing in the dark: An efficient
algorithm for bandit linear optimization. In Conference on Learning Theory (COLT), pages
263–274. Citeseer, 2008.
[2] Dario Amodei, Chris Olah, Jacob Steinhardt, Paul Christiano, John Schulman, and Dan Mané.
Concrete problems in ai safety. arXiv preprint arXiv:1606.06565, 2016.
[3] Itai Arieli and Yakov Babichenko. Private bayesian persuasion. Journal of Economic Theory,
182:185–217, 2019.
[4] Jacob Austin, Augustus Odena, Maxwell I. Nye, Maarten Bosma, Henryk Michalewski, David
Dohan, Ellen Jiang, Carrie J. Cai, Michael Terry, Quoc V. Le, and Charles Sutton. Program
synthesis with large language models. CoRR, abs/2108.07732, 2021.
[5] Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones,
Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai:
Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
[6] Martino Bernasconi, Matteo Castiglioni, Andrea Celli, Alberto Marchesi, Francesco Trovò,
and Nicola Gatti. Optimal rates and efficient algorithms for online bayesian persuasion. In
_International Conference on Machine Learning (ICML), pages 2164–2183. PMLR, 2023._
[7] Samuel R Bowman, Jeeyoon Hyun, Ethan Perez, Edwin Chen, Craig Pettit, Scott Heiner, Kamile˙
Lukošiut¯ e, Amanda Askell, Andy Jones, Anna Chen, et al. Measuring progress on scalable˙
oversight for large language models. arXiv preprint arXiv:2211.03540, 2022.
[8] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal,
Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models
are few-shot learners. Advances in Neural Information Processing Systems (NeurIPS), 33:
1877–1901, 2020.
[9] Collin Burns, Haotian Ye, Dan Klein, and Jacob Steinhardt. Discovering latent knowledge in
language models without supervision. arXiv preprint arXiv:2212.03827, 2022.
[10] Collin Burns, Pavel Izmailov, Jan Hendrik Kirchner, Bowen Baker, Leo Gao, Leopold Aschenbrenner, Yining Chen, Adrien Ecoffet, Manas Joglekar, Jan Leike, et al. Weak-to-strong generalization: Eliciting strong capabilities with weak supervision. arXiv preprint arXiv:2312.09390,
2023.
[11] Matteo Castiglioni, Andrea Celli, Alberto Marchesi, and Nicola Gatti. Online bayesian persuasion. Advances in Neural Information Processing Systems (NeurIPS), 33:16188–16198,
2020.
-----
[12] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé de Oliveira Pinto, Jared
Kaplan, Harrison Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul
Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke
Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad
Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias
Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex
Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Joshua Achiam, Vedant
Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie
Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and
Wojciech Zaremba. Evaluating large language models trained on code. CoRR, abs/2107.03374,
2021.
[13] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng,
Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna:
An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL
[https://lmsys.org/blog/2023-03-30-vicuna/.](https://lmsys.org/blog/2023-03-30-vicuna/)
[14] Paul Christiano, Buck Shlegeris, and Dario Amodei. Supervising strong learners by amplifying
weak experts. arXiv preprint arXiv:1810.08575, 2018.
[15] Paul Christiano, Mark Xu, and Ajeya Cotra. Arc’s first technical report: Eliciting latent knowledge. [https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge)
[arc-s-first-technical-report-eliciting-latent-knowledge, 2021.](https://www.alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge)
[16] Paul F Christiano, Jan Leike, Tom Brown, Miljan Martic, Shane Legg, and Dario Amodei. Deep
reinforcement learning from human preferences. Advances in Neural Information Processing
_Systems (NeurIPS), 30, 2017._
[17] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser,
Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John
Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021.
[18] WD Cook and RJ Webster. Caratheodory’s theorem. Canadian Mathematical Bulletin, 15(2):
293–293, 1972.
[19] Esin Durmus, Liane Lovitt, Alex Tamkin, Stuart Ritchie, Jack Clark, and Deep Ganguli.
[Measuring the persuasiveness of language models, 2024. URL https://www.anthropic.](https://www.anthropic.com/news/measuring-model-persuasiveness)
[com/news/measuring-model-persuasiveness.](https://www.anthropic.com/news/measuring-model-persuasiveness)
[20] Owain Evans, Owen Cotton-Barratt, Lukas Finnveden, Adam Bales, Avital Balwit, Peter Wills,
Luca Righetti, and William Saunders. Truthful ai: Developing and governing ai that does not
lie. arXiv preprint arXiv:2110.06674, 2021.
[21] Brian J Fogg. Persuasive technology: using computers to change what we think and do. Ubiquity,
2002(December):2, 2002.
[22] Jiarui Gan, Rupak Majumdar, Goran Radanovic, and Adish Singla. Bayesian persuasion in
sequential decision-making. In Proceedings of the AAAI Conference on Artificial Intelligence
_(AAAI), volume 36, pages 5025–5033, 2022._
[23] Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn
Song, and Jacob Steinhardt. Measuring mathematical problem solving with the math dataset. In
_Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1, 2021._
[24] Marius Hobbhahn. Eliciting latent knowledge (elk) - distillation/summary.
[https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/](https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary)
[eliciting-latent-knowledge-elk-distillation-summary, 2022.](https://www.alignmentforum.org/posts/rxoBY9CMkqDsHt25t/eliciting-latent-knowledge-elk-distillation-summary)
[25] Mojan Javaheripi, Sébastien Bubeck, Marah Abdin, Jyoti Aneja, Sebastien Bubeck, Caio
César Teodoro Mendes, Weizhu Chen, Allie Del Giorno, Ronen Eldan, Sivakanth Gopi, et al.
Phi-2: The surprising power of small language models. Microsoft Research Blog, 2023.
-----
[26] Jiaming Ji, Tianyi Qiu, Boyuan Chen, Borong Zhang, Hantao Lou, Kaile Wang, Yawen Duan,
Zhonghao He, Jiayi Zhou, Zhaowei Zhang, et al. Ai alignment: A comprehensive survey. arXiv
_preprint arXiv:2310.19852, 2023._
[27] Jiaming Ji, Boyuan Chen, Hantao Lou, Donghai Hong, Borong Zhang, Xuehai Pan, Juntao Dai,
and Yaodong Yang. Aligner: Achieving efficient alignment through weak-to-strong correction.
_arXiv preprint arXiv:2402.02416, 2024._
[28] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh
Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile
Saulnier, et al. Mistral 7b. arXiv preprint arXiv:2310.06825, 2023.
[29] Emir Kamenica. Bayesian persuasion and information design. Annual Review of Economics,
11:249–272, 2019.
[30] Emir Kamenica and Matthew Gentzkow. Bayesian persuasion. American Economic Review,
101(6):2590–2615, 2011.
[31] Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop,
Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human
feedback with ai feedback. arXiv preprint arXiv:2309.00267, 2023.
[32] Yuejiang Liu and Alexandre Alahi. Co-supervised learning: Improving weak-to-strong generalization with hierarchical mixture of experts. arXiv preprint arXiv:2402.15505, 2024.
[33] SC Matz, JD Teeny, Sumer S Vaid, H Peters, GM Harari, and M Cerf. The potential of generative
ai for personalized persuasion at scale. Scientific Reports, 14(1):4692, 2024.
[34] Yurii Nesterov and Arkadii Nemirovskii. Interior-point polynomial algorithms in convex
_programming. SIAM, 1994._
[35] OpenAI. Gpt-4 technical report, 2023.
[36] OpenAI. Introducing superalignment. [https://openai.com/blog/](https://openai.com/blog/introducing-superalignment)
[introducing-superalignment, 2023. Accessed on July 5, 2023.](https://openai.com/blog/introducing-superalignment)
[37] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin,
Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to
follow instructions with human feedback. Advances in Neural Information Processing Systems
_(NeurIPS), 35:27730–27744, 2022._
[38] Lorenzo Pacchiardi, Alex J Chan, Sören Mindermann, Ilan Moscovitz, Alexa Y Pan, Yarin Gal,
Owain Evans, and Jan Brauner. How to catch an ai liar: Lie detection in black-box llms by
asking unrelated questions. arXiv preprint arXiv:2309.15840, 2023.
[39] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al.
Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[40] Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and
Chelsea Finn. Direct preference optimization: Your language model is secretly a reward
[model. In Advances in Neural Information Processing Systems (NeurIPS), 2023. URL https:](https://openreview.net/forum?id=HPuSIXJaa9)
[//openreview.net/forum?id=HPuSIXJaa9.](https://openreview.net/forum?id=HPuSIXJaa9)
[41] Baptiste Roziere, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan,
Yossi Adi, Jingyu Liu, Tal Remez, Jérémy Rapin, et al. Code llama: Open foundation models
for code. arXiv preprint arXiv:2308.12950, 2023.
[42] William Saunders, Catherine Yeh, Jeff Wu, Steven Bills, Long Ouyang, Jonathan Ward, and Jan
Leike. Self-critiquing models for assisting human evaluators. arXiv preprint arXiv:2206.05802,
2022.
[43] Mrinank Sharma, Meg Tong, Tomasz Korbak, David Duvenaud, Amanda Askell, Samuel R
Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds, Scott R Johnston, et al. Towards
understanding sycophancy in language models. arXiv preprint arXiv:2310.13548, 2023.
-----
[44] Zhiqing Sun, Longhui Yu, Yikang Shen, Weiyang Liu, Yiming Yang, Sean Welleck, and Chuang
Gan. Easy-to-hard generalization: Scalable alignment beyond human supervision. CoRR,
abs/2403.09472, 2024.
[45] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy
Liang, and Tatsunori B Hashimoto. Stanford alpaca: An instruction-following llama model,
2023.
[46] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei,
Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open
foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[47] Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou
Yu. Persuasion for good: Towards a personalized persuasive dialogue system for social good.
_arXiv preprint arXiv:1906.06725, 2019._
[48] Yi Zeng, Hongpeng Lin, Jingwen Zhang, Diyi Yang, Ruoxi Jia, and Weiyan Shi. How johnny
can persuade llms to jailbreak them: Rethinking persuasion to challenge ai safety by humanizing
llms. arXiv preprint arXiv:2401.06373, 2024.
[49] Zhaowei Zhang, Fengshuo Bai, Mingzhi Wang, Haoyang Ye, Chengdong Ma, and Yaodong
Yang. Incentive compatibility for ai alignment in sociotechnical systems: Positions and prospects.
_arXiv preprint arXiv:2402.12907, 2024._
-----
**A** **Experimental Details**
In this section, we detail the implementation, including the methodology for constructing information
sets, the basic settings for training models, and the prompts utilized in the experiments.
**A.1** **Information Set Construction**
As the dataset itself does not contain information sets, we employ the open-source model Llama3-8BInstruct [(6)] to rapidly construct the information set for each question within all dataset. Specifically,
during model inference, we set the temperature to 0.7 and the top-p to 0.9. Specifically, for mathematical problem-solving tasks, we generate information across seven aspects: Known Data, Objective,
Methodology, Procedure Summary, Solution Verification, Assumptions, and Error Analysis. For code
generation tasks, the generated aspects include Input and Output, Problem Statement, Test Cases,
Logical Deduction, Algorithm Complexity, Error Handling, Edge Cases, and Code Structure. The
entire process of information set construction is automated, with the decision on which aspects to
generate also being determined by the LLM. For the prior distribution of information items, we
calculate the conditional probabilities for each information item given a problem, generated by the
model, and normalize these probabilities to form a valid distribution. We provide the complete
prompts used for data generation in Appendix A.4.
A specific example
_<QUESTION>_
On Thursday the Meat Market sold 210kg of ground beef. On
Friday they sold twice that amount. On Saturday they only
sold 130kg. On Sunday they sold half of what they sold
Saturday. If they originally planned to sell only 500kg, how
much meat did they sell beyond their original plans?
_<EXTRA INFORMATION>_
"Known Data": Thursday: 210kg of ground beef sold; Friday:
Twice the amount sold on Thursday, which is 2 x 210kg;
Saturday: 130kg of ground beef sold; Sunday: Half of what
was sold on Saturday, which is 0.5 x 130kg; Original plan:
500kg
"Objective": To calculate how much meat was sold beyond the
original plan
"Methodology": 1. Calculate the total amount of ground beef
sold on Thursday, Friday, Saturday, and Sunday. 2. Calculate
the total amount of ground beef sold beyond the original plan
"Procedure Summary": 1. Add the amount sold on Thursday,
Friday, Saturday, and Sunday: 210 + 420 + 130 + 65 2.
Subtract the original plan from the total amount sold
"Solution Verification": 1. Check if the total amount sold
is equal to the sum of the amounts sold on each day. 2.
Check if the answer makes sense in the context of the problem
"Assumptions": The data provided is accurate and complete.
The calculations are performed correctly
"Error Analysis": Potential errors may occur due to incorrect
calculation or misinterpretation of the data. Double-checking
the calculations and verifying the answer against the given
data can help identify and correct any errors
**A.2** **Utility Function**
In our persuasion, the Receiver’s utility function u(x, c, y) is continuous and dependent on its response
_y_ to the input x and the associated information item c _x. Similarly, the Advisor has_
_∈Y_ _∈X_ _∈C_
a continuous utility function v(x, c, y), which is contingent on the Receiver’s response, input, and
(6)https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct
-----
associated information item. In practical implementation, the utility function of the Advisor is
defined as the logarithm of the probability of generating the correct answer, given the input x and
the information item c. For the utility function u(x, c, y), a natural idea is to set the conditional
probability P (y|x, c) as utility. Autoregressive language model generate responses by continuously
choosing the next token with the highest probability, ultimately producing the response with the
maximum probability. This behavior aligns precisely with the Receiver selecting the optimal response
based on the maximum utility. Therefore, in the experiment, given the input x and information item
_c, the response generated by the Receiver is equivalent to selecting a response from the set (1)._
**A.3** **Training Hyperparameters**
In our experiments, we train two models GPT-2 [39] (124M) and Phi-2 [25] (2.7B), utilizing the
training datasets from GSM8K and MATH for mathematical problem-solving tasks, and MBPP for
code generation tasks. Throughout the training, we employe the AdamW optimizer with hyperparameters set to β1 = 0.9, β2 = 0.95, and ϵ = 10[−][5]. Additionally, We use a cosine learning rate schedule
with a maximum learning rate of 5 × 10[−][5]. All models are trained on 4 NVIDIA A800 GPUs.
**A.4** **Prompts**
In our experiment, we employ distinct prompts at different stages, which included the construction
of the information set for math problems and code generation tasks, the generation of signals by
the Advisor, and the generation of responses by the Receiver. Here, we present the prompts used
throughout our experiment.
**A.4.1** **Math Tasks**
Prompt for the construction of information sets on math problems
_<INSTRUCTION>_
Please provide key information on the following aspects:
1. Known Data: List all numerical values and conditions given in the problem.
2. Objective: Clearly define the specific calculation or problem that needs to be solved.
3. Methodology: Describe the mathematical formulas or logical reasoning required to solve the
problem.
4. Procedure Summary: Outline the solution steps from the given data to the resolution of the
problem.
5. Solution Verification: Suggest methods to verify the correctness of the answer.
6. Assumptions: List any assumptions made to simplify the problem or calculation.
7. Error Analysis: Identify potential errors or mistakes that may occur during the calculation.
Ensure that the information provided is accurate, precise to facilitate the correct solution.
_<QUESTION>_
...
**A.4.2** **Response Generation**
Prompt for Receiver generate a response
_<QUESTION>_
...
_<EXTRA INFORMATION>_
Integrate with the additional context to form a thorough and insightful answer.
{information item}
_<ANSWER>_
Let’s think step by step.
-----
**A.4.3** **Code Generation Tasks**
Prompt for the construction of information sets on code task
_<INSTRUCTION>_
Please provide key information on the following aspects:
1. Input and Output: Clearly specify the function’s parameters and return types.
2. Problem Statement: Understand the problem to be solved and the expected solution.
3. Test Cases: Design test cases based on edge cases and special situations.
4. Logical Deduction: Determine the basic logic for solving the problem based on the
description and examples.
5. Algorithm Complexity: Evaluate the time and space complexity of the designed algorithm.
6. Error Handling: Consider handling potential errors and exceptions.
7. Edge Cases: Identify extreme cases in the problem.
Ensure that the information provided is accurate, precise to facilitate the correct solution.
_<QUESTION>_
...
**A.4.4** **Signal Generation**
Prompt for Advisor generate a signal
_<INSTRUCTION>_
Summarize below information and present the most important details in an accurate and precise
format.
_<EXTRA INFORMATION>_
{all information items}
**A.5** **The Training Code of Persuasion**
The pseudocode below shows the basic training process of our Bayesian persuasion framework.
1 def compute_loss(advisor, inputs):
2 # Step1. smaple a information item
3 sample_infos = ...
4 # Step2. produce signals by Advisor, condition on sample_infos
5 signals_ids = advisor.generate(max_new_tokens=50,)
6 # Step3. update belief on info by bayes rule
7 posterior_belief = update_posterior_belief(prior, info_items,
signals_ids)
8 # Step4. Receiver choose best response
9 item_index = torch.max(posterior_belief, dim=1)
10 ...
11 advisor_utility = get_advisor_utility(receiver, inputs_ids,
sample_infos)
12 # Step5. calculate the loss of Advisor
13 loss = (posterior_belief * advisor_utility).mean()
14 return loss
**B** **Additional Experiments**
**B.1** **Generalization on various tasks**
We investigate the generalization of the signaling strategy across different Receivers in Section 4.2.1
and across varying levels of difficulty in Section 4.2.3. In this section, we evaluate the generalization
of our well-trained signaling strategy across various tasks. Specifically, we train the Advisor using
the GSM8K training dataset and assess its performance on the MATH dataset. Concurrently, we train
the Advisor using the MATH training dataset and assess its performance on GSM8K. As shown in
-----
Table 2: Performance of various Receivers under persuasion. We report the accuracy on GSM8K
and MATH. "Posterior Information" refers to sampling the information item from the posterior
distribution, influenced by the Advisor. The Advisor for math tasks differs from that for code
generation tasks. Arrows indicate performance improvements relative to the prior distribution.
No All Prior **Posterior Information**
Task Receiver
Information Information Information
Advisor (GPT-2) Advisor (Phi-2)
Phi-2 56.0 41.0 56.8 57.3 59.3
Mistral-7B 34.3 48.0 45.7 47.3 51.3
Llama2-7B 15.1 36.6 27.2 33.0 44.0
Llama2-7B-Chat 21.8 31.8 37.3 40.3 48.3
Llama2-13B 25.2 38.9 36.2 41.7 43.7
Llama2-13B-Chat 33.9 37.3 36.1 37.7 37.7
Llama3-8B 47.6 54.0 53.7 54.4 54.4
Llama3-8B-Instruct 73.5 72.2 72.3 74.3 74.3
Vicuna-7B 14.9 19.9 29.9 32.6 39.6
Vicuna-13B 23.0 24.8 35.0 38.3 45.3
Vicuna-33B 43.2 44.1 47.8 50.8 55.8
Average (accuracy) 35.3 40.8 43.5 46.2 (6.2% ↑) 50.3 (15.6% ↑)
Phi-2 10.1 11.6 11.5 13.8 14.8
Mistral-7B 6.4 10.3 7.9 9.1 10.8
Llama2-7B 4.1 9.5 6.3 8.5 10.3
Llama2-7B-Chat 4.6 7.8 6.0 7.9 9.3
Llama2-13B 4.5 9.7 7.7 9.3 10.5
Llama2-13B-Chat 5.2 9.8 7.3 8.8 10.4
Llama3-8B 11.0 16.1 12.8 15.5 16.0
Llama3-8B-Instruct 18.1 18.6 18.1 18.8 19.3
Vicuna-7B 3.8 10.1 6.7 8.7 11.1
Vicuna-13B 3.8 11.1 6.7 9.1 13.0
Vicuna-33B 6.8 13.1 9.3 10.6 12.2
Average (accuracy) 7.1 11.6 9.1 10.9 (19.8% ↑) 12.5 (37.3% ↑)
GSM8K
(8-shot)
MATH
(4-shot)
Table 2, the Advisor is capable of enhancing the performance of all Receivers on both GSM8K and
MATH to varying degrees. When GPT-2 acts as the Advisor, it facilitates performance improvements
for multiple Receivers, with an average performance increase of 6.2% on GSM8K and 19.8% on
MATH. In contrast, Phi-2 achieves more notable performance enhancements, with gains of 15.6% on
GSM8K and 37.3% on MATH.
**C** **Proof of Theorem 1**
**Assumption 1. The prior distribution µ[0]** _is in the interior of ∆(C), i.e., µ[0]c_ _[>][ 0][ for all][ c][ ∈C][.]_
**Definition 1 (Linear Map). A vector-valued function f : X →** R[D] _is said to be linear if there exists_
_a matrix M ∈_ R[D][×][M] _such that f_ (x) = Mx for all x ∈ _X ⊆_ R[M] _._
**Theorem 2 (Caratheodory’s Theorem [18]). Let S ⊆** R[D] _be a set. Then, any point in the convex hull_
_of S can be expressed as a convex combination of at most D + 1 points from S._
**Theorem 3 ([6], Theorem 3.2). If X is a polytope and f is a linear map, then there exist algorithms**
_implementing the Carathéodory decomposition and the inverse map f_ _[†]._
_Proof. If X is a polytope and f is a linear map, then f is also a polytope. By Caratheodory’s theorem,_
every point in f (X) can be expressed as a convex combination of at most D + 1 vertices of f (X),
where D is the dimension of f (X).
Given any z ∈ _f_ (X), the Carathéodory decomposition algorithm finds at most D + 1 vertices
_{z1, . . ., zD+1} ⊆_ _f_ (X) and convex coefficients {λ1, . . ., λD+1} such that z = _i=1_ _[λ][i][z][i][. This]_
can be done by solving a linear program.
For the inverse map, given any z ∈ _f_ (X), we want to find an x ∈ _X such that f_ (x[P]) =[D][+1] z. Since f is
linear, we have f (x) = Mx for some matrix M . Finding x is thus equivalent to solving the linear
system Mx = z. Since z ∈ _f_ (X), this system is guaranteed to have a solution, which can be found
using Gaussian elimination.
-----
**Corollary 1 ([6], Corollary 3.4). Under the assumptions of Theorem 3, there exists a regret minimizer**
_R such that Algorithm 1 guarantees cumulative regret_
_RT_ 16D[3][/][2][p]T log T, (8)
_≤_
_where D is the dimension of f_ (X).
_Proof. To obtain the regret bound, we equip Algorithm 1 with a suitably-defined regret minimizer_
. In particular, works by observing the realized utility v(yt, ct), since the sender does not
_R_ _R_
directly play ϕt, but rather draws an action yt according to ϕt,ct . Such a regret minimizer R can be
implemented by the algorithm introduced by [1], as any polytope in R[D] has a D-self concordant
barrier [34] (Theorem 2.5.1). This yields the stated regret bound [1] (Theorem 1).
With the above theorems and corollary, we are now ready to prove Thereom 1.
_Proof. The proof of this theorem will proceed in three steps._
**Step 1: Show that the set of direct and persuasive signaling schemes P is a polytope.**
To see this, note that P can be described by the following linear constraints:
_ϕc(y) = 1,_ _c_ _,_ (9)
_∀_ _∈C_
_yX∈Y_
_µcϕC(y)(v(y, c) −_ _v(y[′], c)) ≥_ 0, ∀y, y[′] _∈Y,_ (10)
Xc∈C
_ϕc(y)_ 0, _c_ _, y_ _._ (11)
_≥_ _∀_ _∈C_ _∈Y_
Constraint equation 9 ensures that ϕc is a valid probability distribution for each c. Constraint
equation 10 is the persuasiveness constraint. Constraint equation 11 ensures non-negativity. As these
are all linear constraints, P is a polytope.
**Step 2: Define a linear map f : P →** R[m].
Let f : R[m] be defined as f (ϕ) = [ _v(ϕ, y)]y_ for all ϕ, where v(ϕ, y) =
_Y →_ _−_ _∈Y_ _∈P_
_c_ _[µ][c][ϕ][c][(][y][)][v][(][y, c][)][ is the Advisor’s expected utility for action][ y][ under signaling scheme][ ϕ][. We]_
_∈C_
can verify that f is linear:
P
_f_ (αϕ1 + βϕ2) = [ _v(αϕ1 + βϕ2, y)]y_
_−_ _∈Y_
= [−αv(ϕ1, y) − _βv(ϕ2, y)]y∈Y_
= α[ _v(ϕ1, y)]y_ + β[ _v(ϕ2, y)]y_
_−_ _∈Y_ _−_ _∈Y_
= αf (ϕ1) + βf (ϕ2)
for any ϕ1, ϕ2 ∈P and α, β ∈ R.
**Step 3: Apply Corollary 1 to derive the regret bound.**
Since f is a linear map from P to R[m], we have f (P) ⊆ R[m], so the dimension of f (P) is at most m.
By Corollary 1, if the dimension of f (P) is D, then there exists a regret minimizer with regret bound
_RT_ 16D[3][/][2][p]T log T.
_≤_
In our setting, D ≤ _m. Therefore, we get the following regret bound:_
_RT = O(m[3][/][2][p]T log T_ ).
Putting everything together, we have shown that the regret of Algorithm 1 is upper bounded by
_O(m[3][/][2][√]T log T_ ), where m = |Y| is the size of the Receiver’s response space.
-----
| [
"Yaodong, Yang",
"Boyuan, Chen",
"Fengshuo, Bai",
"Mingzhi, Wang",
"Zhaowei, Zhang",
"Ying, Wen",
"Yinda, Xu"
] | 2024-05-28T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2405.18718 | https://arxiv.org/abs/2405.18718 | https://www.semanticscholar.org/paper/718b5f6c0cac99ec93b8c19743be2d3199e8b68c |
Embedding Mathematical Formulas into Vector Space | N/A | null | null | [
"Fraknói, Ádám"
] | 2023-09-01T00:00:00 | null | false | 0 | 0 | null | http://aitp-conference.org/2023/abstract/AITP_2023_paper_17.pdf | null | null |
Enhance Reasoning by Learning from Mistakes: Peer-Review Knowledge Distillation from Multiple Large Language Models | Large language models (LLMs) have exhibited complex reasoning abilities by generating question rationales and demonstrated exceptional performance in natural language processing (NLP) tasks. However, these reasoning capabilities generally emerge in models with tens of billions of parameters, creating significant computational challenges for real-world deployment. Recent research has concentrated on improving open-source smaller models through knowledge distillation (KD) from commercial LLMs. Nevertheless, most of these studies rely solely on the responses from one single LLM as the gold rationale for training. In this paper, we introduce a novel Mistake-Aware Peer-Review Distillation (MAPD) approach: 1) Instead of merely obtaining gold rationales from teachers, our method asks teachers to identify and explain the student's mistakes, providing customized instruction learning data. 2) We design a simulated peer-review process between teacher LLMs, which selects only the generated rationales above the acceptance threshold. This reduces the chance of teachers guessing correctly with flawed rationale, improving instructional data quality. Comprehensive experiments and analysis on mathematical, commonsense, and logical reasoning tasks demonstrate the effectiveness of our method. | This paper introduces a novel Mistake-Aware Peer-Review Distillation (MAPD) approach, which asks teachers to identify and explain the student's mistakes, providing customized instruction learning data. | [
"Zhuochun, Li",
"Yuelyu, Ji",
"Rui, Meng",
"Daqing, He"
] | 2024-10-04T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2410.03663v1 | https://arxiv.org/abs/2410.03663 | https://www.semanticscholar.org/paper/0bbe20cb9e8b95a8991b496057f4c31e02a96bb5 |
|
Enhancing Formal Theorem Proving: A Comprehensive Dataset for Training AI Models on Coq Code | In the realm of formal theorem proving, the Coq proof assistant stands out for its rigorous approach to verifying mathematical assertions and software correctness. Despite the advances in artificial intelligence and machine learning, the specialized nature of Coq syntax and semantics poses unique challenges for Large Language Models (LLMs). Addressing this gap, we present a comprehensive dataset specifically designed to enhance LLMs' proficiency in interpreting and generating Coq code. This dataset, derived from a collection of over 10,000 Coq source files, encompasses a wide array of propositions, proofs, and definitions, enriched with metadata including source references and licensing information. Our primary aim is to facilitate the development of LLMs capable of generating syntactically correct and semantically meaningful Coq constructs, thereby advancing the frontier of automated theorem proving. Initial experiments with this dataset have showcased its significant potential; models trained on this data exhibited enhanced accuracy in Coq code generation. Notably, a particular experiment revealed that a fine-tuned LLM was capable of generating 141 valid proofs for a basic lemma, highlighting the dataset's utility in facilitating the discovery of diverse and valid proof strategies. This paper discusses the dataset's composition, the methodology behind its creation, and the implications of our findings for the future of machine learning in formal verification. The dataset is accessible for further research and exploration: https://huggingface.co/datasets/florath/coq-facts-props-proofs-gen0-v1 | This paper presents a comprehensive dataset specifically designed to enhance LLMs' proficiency in interpreting and generating Coq code, and discusses the dataset's composition, the methodology behind its creation, and the implications for the future of machine learning in formal verification. | # Enhancing Formal Theorem Proving: A Comprehensive Dataset for Training AI Models on Coq Code
Andreas Florath[1]*
**Abstract**
In the realm of formal theorem proving, the Coq proof assistant stands out for its rigorous approach to verifying
mathematical assertions and software correctness. Despite the advances in artificial intelligence and machine
learning, the specialized nature of Coq syntax and semantics poses unique challenges for Large Language
Models (LLMs). Addressing this gap, we present a comprehensive dataset specifically designed to enhance
LLMs’ proficiency in interpreting and generating Coq code. This dataset, derived from a collection of over 10,000
Coq source files, encompasses a wide array of propositions, proofs, and definitions, enriched with metadata
including source references and licensing information. Our primary aim is to facilitate the development of LLMs
capable of generating syntactically correct and semantically meaningful Coq constructs, thereby advancing the
frontier of automated theorem proving.
Initial experiments with this dataset have showcased its significant potential; models trained on this data exhibited
enhanced accuracy in Coq code generation. Notably, a particular experiment revealed that a fine-tuned LLM
was capable of generating 141 valid proofs for a basic lemma, highlighting the dataset’s utility in facilitating the
discovery of diverse and valid proof strategies. This paper discusses the dataset’s composition, the methodology
behind its creation, and the implications of our findings for the future of machine learning in formal verification.
The dataset is accessible for further research and exploration:
https://huggingface.co/datasets/florath/coq-facts-props-proofs-gen0-v1
**Keywords**
Mathematics Formal Proof — Coq — dataset — ML — LLM
1flonatel GmbH & Co. KG, Aachen, Germany
*Corresponding author: [email protected]
**Contents**
8.3 Experiment 3: S (m * n) = m * n + n. . . . . . . . . . 6
Prompt and Challenge • Discussion
**Introduction**
**Objectives**
**9** **Results**
**10** **Outlook**
**Prior Art**
**Data Sources**
**11** **Acknowledgment**
**12** **Appendix**
**Licenses**
**Dataset “coq-facts-props-proofs”**
12.1141 Ways to Proof the Lemma . . . . . . . . . . . . . 8
**1. Introduction**
**7** **Statistics** **4**
7.1 info.parquet . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
7.2 facts.parquet . . . . . . . . . . . . . . . . . . . . . . . . . . 4
7.3 props-proof.parquet . . . . . . . . . . . . . . . . . . . . . 4
**8** **Experiments** **5**
8.1 Experiment 1: n = n + 0 . . . . . . . . . . . . . . . . . . 5
Prompt and Reference Proof • Model Responses • Comparative
Model Responses • Discussion
In the exploration of Large Language Models (LLMs) for code
optimization [1], two significant limitations were identified:
- The dependency on human interaction impedes the
model’s ability to function autonomously, limiting its
applicability to extensive source code collections and
automation processes.
- The indefinite nature of optimization completion, where
a considerable portion of time is allocated to verification rather than the optimization process itself. The
8.2 Experiment 2: 7 + 3 = 10 . . . . . . . . . . . . . . . . . 6
Prompt and Theoretical Proof • Model’s Response • Comparison
with Other Models • Discussion
-----
measurement of optimization efficacy remains a challenge.
The adoption of formal mathematical proofs presents a
logical advancement for overcoming the second limitation.
Through formal proof assistants like Coq [2], Lean [3], or
Isabelle [4], the verification of propositions (such as lemmas
or theorems) becomes definitive. Once verified, the proposition is conclusively validated, eliminating the need for further
evaluation.
This approach advocates for focusing on domains akin to
programming, yet characterized by determinate termination
states. The development of a system, potentially using agentbased models, is proposed. Such a system could subsequently
be applied to the autonomous optimization of source code,
thereby resolving the identified challenges.
**2. Objectives**
This research endeavors to advance the integration of machine
learning and artificial intelligence within the realm of formal
theorem proving, emphasizing the Coq Proof Assistant. By
developing a dedicated dataset, this work aims to refine ML
models, notably enhancing LLMs’ capabilities in processing
and generating Coq code. The objectives are meticulously
outlined to encompass:
**Enhance Syntax and Semantic Comprehension: Enhanc-**
ing LLMs’ proficiency in interpreting and generating Coq
code by providing a comprehensive dataset, thereby facilitating a deeper comprehension of Coq’s syntax, mathematical
logic, and proof strategies.
**Enable Autonomous Content Generation: Empowering**
LLMs to autonomously formulate mathematical definitions,
lemmas, examples, and exercises, adjusting the complexity to
bolster formal mathematics contributions.
**Optimize Coq Files for Machine Interaction: Refining Coq**
codebases for improved machine interaction through simplification and standardization, aiming for broader application
and usage.
**Facilitate Proof Generation: Equipping LLMs with the nec-**
essary tools for autonomous proof generation, laying a foundation for innovative advancements in formal proofs.
The pursuit of these objectives is anticipated to elevate
LLMs’ efficiency with Coq code, marking significant progress
in automated theorem proving and broadening the horizons
for formal mathematics and computer science research.
**3. Prior Art**
A singular comprehensive dataset, The Stack v2, has been
identified amidst extensive research efforts as encompassing
a diverse and extensive collection of Coq source code [5].
Hosting over 150,000 files, with nearly 80,000 under a permissive license, the dataset stands out by providing identifiers
for source code retrieval from S3 storage rather than including the code directly. Unprocessed raw data constitutes the
dataset’s format, presenting each file in a single row. Notably, precise and detailed license documentation is provided
for each file, an approach mirrored in the dataset discussed
herein.
On Huggingface [6], four additional datasets containing
Coq source code were found. Two of these datasets comprised
entire Coq files within single rows, leading to impractical
usability due to excessively large row sizes, with the largest
containing over 6 million characters [7, 8]. Although these collections were sizable, the licensing terms were inadequately
addressed, mixing data from various repositories under different licenses without proper license adherence. Queries
regarding licensing prompted the removal of both datasets.
CoqGym [9] presents another notable attempt, offering
a substantial collection under the Creative Commons Attribution 2.0 Generic License [10], which is incompatible with
the licenses of the included Coq source code [11]. The issue
of license compatibility remains unresolved [12]. Furthermore, CoqGym duplicated content from other projects into its
repository, resulting in a dataset that is now outdated by five
years.
The dataset ”coq code” [13] on Huggingface, though adhering to a step-by-step format (including hypothesis, goal,
and tactic), is limited, containing fewer than 25,000 entries.
Its formatting is suboptimal, with data merged into a single
text column and separated by special tags.
In parallel efforts to utilize machine learning for enhancing formal proving in Coq, research has been conducted on the
automation of lemma name generation, leveraging a dataset
constructed from approximately 450 Coq source files from
the math-comp project. This dataset, aimed at producing AST
and token files through preprocessing, encountered challenges
in data bloat and clarity, raising questions on its efficacy for
LLM training or fine-tuning. To date, there’s no documented
success in employing this specific dataset for LLM enhancement [14, 15]. Another effort was formatting Coq code using
language models. [16]
No datasets containing Coq source code were found on
Kaggle at the time of this writing. [17]
Against the backdrop of these endeavors, the dataset presented in this paper distinguishes itself through a unique combination of scale, organization, and focus on formal theorem
proving. Unlike previously mentioned datasets, which either
offer raw, unprocessed files or are constrained by licensing
and formatting issues, this dataset provides a curated and
processed collection of Coq code.
Two recent publications, although not directly related to
the dataset focus of this paper, share similar approaches or
motivations:
An approach is described where a large-scale, graph-based
dataset and a graph neural network are employed to dynamically integrate and leverage the hierarchical structure of definitions, theorems, and proofs within Coq. This method sig
-----
nificantly enhances AI agents’ capability to adapt to new
mathematical concepts and lemmas not encountered during
training, presenting a critical advancement in the automation
of theorem proving [18].
A novel methodology employing Monte Carlo Tree Search
(MCTS) to guide LLMs for the generation of verified programs in Dafny, Lean, and Coq, named VMCTS, enhances
synthesis capabilities by incorporating verifier feedback directly into the search algorithm, showcasing its efficiency
by solving complex verification problems in notably shorter
times compared to base models and even rivaling ChatGPT4’s
augmented capabilities [19].
**4. Data Sources**
The Coq source files for the datasets were meticulously collected from a diverse array of sources across the internet,
focusing on repositories that are pivotal within the Coq community and cover a broad spectrum of mathematical and computational theories. These sources encompass a range of
categories, including foundational libraries, formalized mathematical theorems, computer science concepts, and algorithm
implementations.
Foundational Libraries and Frameworks form the bedrock,
with repositories like the official Coq repository [20], mathcomp (Mathematical Components) [21], and Coq’s standard
library extensions [22]. These are essential for anyone working with Coq, offering basic definitions, theorems, and tactics
widely used in further Coq developments.
Formalized Mathematics and Theorem Proofs are represented through collections such as GeoCoq (geometry) [23],
the formal proofs of the Four Color Theorem [24], and various projects under the Coq-community umbrella focusing
on specific mathematical domains like algebra [25], number
theory [26], and logic [27]. These projects not only provide
proofs of known theorems but also extend the library of formalized mathematics accessible for Coq users.
Computer Science Theories and Algorithms feature prominently, with projects like Verdi (for distributed systems verification) [28], the Iris project for concurrent systems [29],
and various algorithm collections including sorting, graph
theory, and data structures. These repositories are crucial for
researchers and practitioners interested in the formal verification of software and algorithms.
The repositories were chosen for their quality, relevance to
the Coq community, and contribution to the ecosystem. The
collected datasets aim to provide comprehensive coverage of
the syntax and semantics employed in Coq development, supporting the project’s goal of enhancing LLMs’ understanding
and generation capabilities with respect to Coq code. The
datasets ensure a wide representation of the Coq language’s
potential applications, from pure mathematics to computer
science.
**5. Licenses**
Addressing the complexities of licensing within the context
of aggregating datasets from various sources is a non-trivial
challenge. [5] The datasets compiled for enhancing Large
Language Models’ (LLMs) comprehension and generation of
Coq code embody this challenge, as they amalgamate content
from a multitude of repositories, each governed by its unique
license. Given the diverse origins of the Coq source files,
the datasets do not subscribe to a singular license. Instead,
each row in the facts and proposition / proofs table link to the
license table where for each row the needed information can
be found.
To comply with the stipulations of these licenses, especially those like MIT which mandate the inclusion of original
licensing and authorship information, the dataset incorporates copies of the original license files and, where available,
the author files. This practice ensures adherence to the legal requirements of software redistribution, particularly for
open-source licenses that permit such activities.
The compilation strictly omits libraries or files that lack
an explicit open-source license or are under a commercial
license, thereby ensuring that the dataset comprises only data
that is legally redistributable. This careful selection process
is pivotal for maintaining the integrity and legality of the
datasets, facilitating their use in research and development
without infringing upon copyright laws or license conditions.
The dataset encompasses a wide range of licenses, reflecting the diversity of the Coq community and the broader
open-source ecosystem. Among these are:
- Apache License 2.0 (Apache-2.0)
- BSD 2-Clause ”Simplified” License (BSD-2-Clause)
- BSD 3-Clause ”New” or ”Revised” License (BSD-3Clause)
- CEA CNRS Inria Logiciel Libre License, versions 1.0,
2.1 (CECILL-1.0, CECILL-2.1), including its variants
CECILL-B and CECILL-C for library and plugin distributions, respectively
- GNU General Public License versions 2.0 only (GPL2.0-only), 3.0 only (GPL-3.0-only), and 3.0 or later
(GPL-3.0-or-later)
- GNU Lesser General Public License versions 2.1 only
(LGPL-2.1-only), 2.1 or later (LGPL-2.1-or-later), 3.0
only (LGPL-3.0-only), and 3.0 or later (LGPL-3.0-orlater)
- MIT License (MIT)
- Mozilla Public License 2.0 (MPL-2.0)
- UniMath License (specific to the UniMath library)
This approach ensures that the datasets not only respect the
legal and ethical considerations of software redistribution but
also provide a rich, legally compliant resource for advancing
the capabilities of LLMs in processing and generating Coq
code.
-----
**6. Dataset “coq-facts-props-proofs”**
This dataset is comprised of three distinct tables:
1. Definitions or notations categorized as facts.
2. Theorems and lemmas, alongside their proofs, classified
as propositions.
3. Licensing and repository information for each entry
within the facts and propositions tables.
License identification was conducted manually: a license
hint within the Readme file was prioritized, followed by the
contents of any LICENSE file. Only repositories under opensource licenses permitting redistribution were included.
The dataset exclusively features Coq source code files (.v
files), which were pre-processed using a customized OCaml
parser to separate Coq sentences, remove comments, and eliminate directives like #global. This process also involved
condensing multiple consecutive whitespaces into a single
space and deduplicating based on facts and proposition/proof
content rather than file origin. The preprocessing was purely
done on parsing level, no evluation of the Coq source code
was done. Consequently, some parts of the Coq code may not
evaluate or may not be compatible with the latest version of
Coq.
The facts table is one cornerstone of the dataset, encompassing definitions or notations. Each row within this table
represents a unique fact, identified by a Coq definition or notation. These facts are detailed through several key columns:
**fact the fact itself, presented in Coq syntax**
**imports a list of imports, specifying the Coq modules and**
libraries required for the fact’s context
**filename the filename, indicating the source file from which**
the fact was extracted
**symbolic name the symbolic name, providing a reference**
handle for the fact to the repository and license information.
The table props-proofs is the other key component
of the dataset. The structure is very similar to the facts table,
but instead of using the facts column there are two columns
**proposition and proof.**
The ”info” table within our dataset acts as a vital link
between the symbolic name and its corresponding repository,
enriched with precise licensing information. It is comprised
of four columns:
**symbolic name serving as a unique identifier correlating to**
entries within the ”facts” and ”props-proofs” tables
**url providing the repository’s location which hosts the source**
Coq files
**hexsha representing the Git SHA of the last commit at the**
time the repository was checked out, offering a snapshot
for reproducibility and version tracking
**spdx-id detailing the license under which the repository’s**
content is distributed, in alignment with the Software
Package Data Exchange (SPDX) identifiers.
The dataset is accessible on huggingface: [30].
**7. Statistics**
**7.1 info.parquet**
The info.parquet table comprises 142 rows, each representing a repository. The distribution of licenses across these
repositories is outlined below:
**License** **Count** **License** **Count**
MIT 43 Apache-2.0 3
LGPL-2.1-only 29 MPL-2.0 3
LGPL-2.1-or-later 12 GPL-3.0-only 3
CECILL-B 9 GPL-3.0-or-later 3
CECILL-1.0 7 BSD-3-Clause 2
LGPL-3.0-only 7 CECILL-2.1 2
LGPL-3.0-or-later 6 GPL-2.0-only 1
CECILL-C 6 CECILL-2.0 1
BSD-2-Clause 4 UniMath 1
**7.2 facts.parquet**
Data pertaining to the facts.parquet table is provided below,
with measurements based on character count:
|Columns Rows Shortest fact Longest fact Mean length Standard deviation|4 103,446 12 37,630 132.26 359.47|
|---|---|
Columns 4
Rows 103,446
Shortest fact 12
Longest fact 37,630
Mean length 132.26
Standard deviation 359.47
**7.3 props-proof.parquet**
Details regarding the props-proof.parquet table are summarized below, with lengths measured in characters:
|Columns Rows Shortest proposition Longest proposition Mean length proposition Standard deviation proposition Shortest proof Longest proof Mean length proof Standard deviation proof|5 166,035 13 7400 104.05 97.65 11 177585 347.88 1290.80|
|---|---|
Columns 5
Rows 166,035
Shortest proposition 13
Longest proposition 7400
Mean length proposition 104.05
Standard deviation proposition 97.65
Shortest proof 11
Longest proof 177585
Mean length proof 347.88
Standard deviation proof 1290.80
Observations indicate high standard deviations, attributed
to the presence of a few exceptionally long facts, propositions,
and proofs. The deviation pattern when excluding the top 5%
of length can be seen in figure 1.
-----
complete dataset described in this paper, specifically designed
to enhance its proficiency in interpreting and generating Coq
code and serves as the experiment to show the usefulness of
the dataset.
**Mistral-7b-Instruct-0.2 Based on the Mistral-7b architec-**
ture, this model leverages instructional data to guide its responses and programming language understanding. [31]
**Starcoder2-15b Starcoder2-15b has been trained on over 600**
different programming languages, including Coq, providing
it with a broad syntax and semantic understanding across a
wide array of languages. [5]
**Google Gemini This publicly available chat model from**
Google demonstrates capabilities in natural language processing and understanding, applied across various contexts,
including programming. [33]
**OpenAI ChatGPT 4 As OpenAI’s publicly available chat**
model, ChatGPT 4 showcases advancements in language models’ ability to engage in detailed conversations and generate
code snippets. [34]
**8.1 Experiment 1: n = n + 0**
**8.1.1 Prompt and Reference Proof**
For this experiment, the lemma tested was as follows:
1 Lemma plus_n_O : forall n:nat, n = n + 0.
The reference proof contained within the training data is
straightforward [23]:
1 Proof.
2 induction n; trivial.
3 Defined.
**8.1.2 Model Responses**
Among the 563 responses generated that began with Proof.,
141 were identified as valid (see section 12.1), demonstrating
the model’s adeptness not only in understanding Coq syntax
but also in navigating its semantic landscape to reach valid
conclusions through various methods.
Notably, the variety of proofs highlights the LLM’s capacity to utilize a broad spectrum of Coq’s proof strategies,
ranging from direct application of arithmetic simplification
(auto with arith.) to structural induction and recursive definitions (induction n as [| n IHn].). This
diversity not only showcases the potential of LLMs in theorem proving but also suggests a nuanced understanding of the
Coq proof assistant’s capabilities, opening new avenues for
exploring automated theorem proving.
These findings are particularly significant as they suggest
that LLMs, when equipped with a well-curated dataset, can
extend beyond mere syntactic correctness to exhibit a deep
comprehension of mathematical logic and proof strategies.
This depth enables the generation of multiple, distinctively
valid approaches to proving a single proposition, thereby enriching the repertoire of automated theorem proving.
**Figure 1. Deviation of length of proofs for the 0.95 percentile**
**8. Experiments**
In this section, we explore one of many possible applications of the dataset through the fine-tuning of an existing base
model, Mistral-7b [31]. This exercise is meant to serve as
an illustration of the dataset’s potential rather than a comprehensive or central focus of the paper. Our intention is
to demonstrate, via selected examples, how the dataset can
be utilized to potentially enhance LLM’s understanding and
generation of Coq code.
The fine-tuning process, performed on an NVidia A30
GPU across approximately seven days, involved adapting the
model to better handle Coq syntax and logic as represented
in the dataset. Every three hours a snapshot of the model was
generated. It’s important to note that while the model’s performance post fine-tuning provides insights into the dataset’s
utility, it represents only one of many possible evaluation
metrics.
The model’s output underwent evaluation at a temperature setting of 0.4 across different snapshots using coqc or
coqide. We curated the output for readability, truncating
responses at logical endpoints such as Qed., to focus on the
model’s capability to produce syntactically and logically coherent Coq constructs. These choices were guided by the goal
of assessing the model’s ability to generate syntactically and
logically correct Coq code, underlining the qualitative rather
than quantitative nature of this experiment. A version of the
model which was trained only using Coq code with permissive
licenses is publicity available [32].
Additionally, we made prompt adjustments to encourage
Coq-specific responses from the different models, indicating
the necessity of tailored inputs for optimal output in domainspecific tasks. The comparison of the fine-tuned model against
several prominent LLMs provides a broader context for evaluating the dataset’s impact on enhancing Coq code generation capabilities, albeit this comparison is illustrative of the
dataset’s potential rather than an exhaustive evaluation of its
efficacy.
The models under observation and for comparison:
**CoqLLM-FineTuned This model was fine-tuned with the**
-----
These implications reinforce the utility of specialized
datasets in enhancing the performance of LLMs within domainspecific tasks such as theorem proving.
**8.1.3 Comparative Model Responses**
**Mistral-7b-Instruct Responded in a non-Coq language and**
failed to generate a valid proof even after prompt adaptation.
**ChatGPT 4 Although replying in Coq, the proof offered was**
incorrect.
**Google Gemini Required prompt modification before pro-**
ducing a correct proof.
**Starcoder2-15b Did not provide any proof, despite being**
prompted.
**8.1.4 Discussion**
This experiment highlights the CoqLLM-FineTuned model’s
superior capability in producing correct Coq proofs that were
not part of its training set, distinguishing it from other models, including those of similar size and significantly larger
ones like ChatGPT 4 or Google Gemini. The model not only
demonstrated its understanding of Coq syntax and logic but
also its ability to creatively solve problems without directly
reproducing training data.
**8.2 Experiment 2: 7 + 3 = 10**
**8.2.1 Prompt and Theoretical Proof**
The prompt for this experiment was:
1 Lemma ex1: 7 + 3 = 10.
Notably, this specific lemma did not exist within the training dataset. However, a theoretically valid proof employing
basic reflexivity is suggested:
1 Proof.
2 reflexivity.
3 Qed.
**8.2.2 Model’s Response**
Remarkably, the CoqLLM-FineTuned model independently
arrived with most responses at the same proof as the one
proposed, successfully utilizing the reflexivity tactic.
**8.2.3 Comparison with Other Models**
**Mistral-7b-Instruct Failed to provide a valid Coq proof,**
responding inappropriately and deviating significantly from
the prompt.
**ChatGPT 4, Google Gemini, and Starcoder2-15b Each of**
these models managed to produce valid proofs, indicating a
general competence in handling straightforward arithmetic
propositions in Coq.
**8.2.4 Discussion**
This experiment underscores the performance of the CoqLLMFineTuned model in generating a valid proof for a proposition not present in its training set, further exemplifying
its advanced reasoning capabilities. Unlike the Mistral-7bInstruct model, which failed to generate a correct response,
the CoqLLM-FineTuned, alongside other prominent models
like ChatGPT 4, Google Gemini, and Starcoder2-15b, demonstrated proficiency in Coq syntax and logical reasoning.
**8.3 Experiment 3: S (m * n) = m * n + n.**
**8.3.1 Prompt and Challenge**
The lemma explored in this experiment was as follows:
1 Lemma mult_S : forall m n : nat, S (m * n) = m * n
+ n.
Intentionally erroneous, this lemma serves to test the
LLMs’ ability to recognize or question the validity of a proposition, essentially assigning them an impossible task.
**8.3.2 Discussion**
Despite the intrinsic fallacy in the lemma, all tested models,
including Mistral-7b-Instruct, ChatGPT 4, Google Gemini,
Starcoder2, and CoqLLM-FineTuned, endeavored to construct
a proof without indicating any recognition of the proposition’s
incorrectness. This uniform approach across diverse models
reveals a critical area for future enhancement in LLMs’ capabilities: the detection of inherently flawed or unsolvable
problems.
**9. Results**
The fine-tuning of a Large Language Model (LLM) with
the Coq dataset demonstrated promising outcomes, with the
model generating outputs with a high probability exclusively
in Coq syntax. This specificity in output underscores the
dataset’s effectiveness in aligning the trained model with the
requirements of both agent systems and Coq runtime environments, making it a preferred choice for these applications.
The endeavor also revealed the feasibility of achieving
significant advancements in model performance with limited resources and within a constrained timeframe. The refined model showcased an ability to produce insightful, Coqcompatible remarks, underscoring the potential for further
enhancing the efficiency of theorem proving in Coq.
Moreover, the careful curation, cleanup, and licensing of
the dataset not only facilitated this study but also ensure its
utility for the broader research community. This resource is
poised to contribute to the ongoing development of agents,
marking a crucial step in the journey towards more sophisticated and autonomous theorem proving systems.
Building upon these achievements, the notable success in
Experiment 1 (see section 8.1), where the fine-tuned LLMs
generated 141 valid proofs for the proposition n = n + 0 opens
a new vista for the application of LLMs in generating valuable
Coq source code. This accomplishment illustrates the models’
capacity not only to adhere to syntactic correctness but also to
engage in creative problem-solving within the Coq framework.
The presence of valid, varied proofs further underscores the
potential utility of LLMs as tools for enriching and expanding
Coq datasets with new, verified source code.
-----
**10. Outlook**
The successful fine-tuning of the Large Language Model
(LLM) using the Coq dataset opens up several promising
avenues for future research and application enhancements:
**Agent-based Application: The dataset can serve as a training**
data for models for developing agents capable of interacting
with, and reasoning about, Coq code. This could significantly
streamline processes in formal methods and theorem proving
by providing automated assistance.
**Refining Prompts with the Dataset: Utilizing the dataset to**
fine-tune prompts can enhance the generation of higher quality
and more relevant content. This improvement can bolster
the model’s capacity to tackle intricate problem-solving and
reasoning within formal verification’s scope.
**Hypothesis, Goal, Tactic Approach: Implementing a struc-**
tured approach that defines hypotheses, sets goals, and employs tactics could further sophisticate the model’s interaction
with formal proofs. This strategy could facilitate the development of more advanced models capable of autonomously
devising and verifying proofs, thus pushing the boundaries of
what’s possible in automated theorem proving.
**Curating, Enhancing, and Enlarging the Coq Source Code**
**Base Using LLMs: The demonstrated capacity of LLMs**
to generate valid and diverse Coq proofs opens the door to
utilizing these models for the curation, enhancement, and
expansion of the Coq source code base. By integrating LLMgenerated code into the dataset, we can continuously improve
the quality and breadth of Coq resources available to the
research community, fostering a more dynamic and robust
ecosystem for theorem proving and formal verification.
These directions not only underscore the potential for the
dataset to catalyze significant advancements in the field but
also highlight the broader implications for enhancing AI’s
role in formal methods and beyond.
**11. Acknowledgment**
The author extend his sincere gratitude to Andrei Agapi for his
invaluable ideas and for conducting the initial review, which
significantly contributed to the enhancement of this paper.
“Stylish Article” template [35] was used.
-----
**12. Appendix**
**12.1 141 Ways to Proof the Lemma**
The validation was performed using coqc version 8.19.1. In addition to the proposition, the validated coq source file contained
some Require Imports as well as some notations like plus 0 l or plus comm.
1 Proof. auto with arith. Qed.
2 Proof. destruct n; simpl; auto. Qed.
3 Proof. induction n; simpl; auto. Qed.
4 Proof. intro n; apply eq_add_S; auto with arith. Qed.
5 Proof. intro n; destruct n as [|p]. - reflexivity. - simpl; auto. Qed.
6 Proof. intro n; elim n ; auto with arith. Qed.
7 Proof. intro n; elim n ; simpl; auto. Qed.
8 Proof. intro n; elim n using nat_rect. - reflexivity. - intros m Hrec; simpl; auto. Qed.
9 Proof. intro n; elim n using nat_rect. - reflexivity. - intros m Hrec; simpl; rewrite <- Hrec; auto with
arith. Qed.
10 Proof. intro n; elim n using nat_rect. - reflexivity. - simpl. auto with arith. Qed.
11 Proof. intro n; elim n using nat_rect; auto. Qed.
12 Proof. intro n; elim n. - reflexivity. - intros m IHm. simpl. rewrite <- IHm. reflexivity. Qed.
13 Proof. intro n; elim n. - reflexivity. - intros p IHp; simpl; rewrite <- IHp; reflexivity. Qed.
14 Proof. intro n; elim n; auto with arith. Qed.
15 Proof. intro n; elim n; auto. Qed.
16 Proof. intro n; elim n; simpl. - reflexivity. - intros m IHm; rewrite <- IHm; reflexivity. Qed.
17 Proof. intro n; elim n; simpl. reflexivity. intros m IH. rewrite <- IH. auto with arith. Qed.
18 Proof. intro n; elim n; simpl. reflexivity. intros m IHm. rewrite <- IHm. reflexivity. Qed.
19 Proof. intro n; elim n; simpl; auto with arith. Qed.
20 Proof. intro n; elim n; simpl; auto. Qed.
21 Proof. intro n; induction n as [ | n IHn]. - reflexivity. - simpl; rewrite <- IHn; auto. Qed.
22 Proof. intro n; induction n as [ | p IHp]. - reflexivity. - simpl; rewrite <- IHp; reflexivity. Qed.
23 Proof. intro n; induction n as [| m IHm]; auto with arith. Qed.
24 Proof. intro n; induction n as [| n Hrecn]. - reflexivity. - simpl; rewrite <- Hrecn; reflexivity. Qed.
25 Proof. intro n; induction n as [| n IH]. - reflexivity. - simpl. apply f_equal. assumption. Qed.
26 Proof. intro n; induction n as [| n IH]. - reflexivity. - simpl. rewrite <- IH. reflexivity. Qed.
27 Proof. intro n; induction n as [| n IH]. - reflexivity. - simpl; auto. Qed.
28 Proof. intro n; induction n as [| n IH]. - reflexivity. - simpl; rewrite <- IH; reflexivity. Qed.
29 Proof. intro n; induction n as [| n IH]; auto. Qed.
30 Proof. intro n; induction n as [| n IH]; simpl. - reflexivity. - rewrite <- IH; ring. Qed.
31 Proof. intro n; induction n as [| n IH]; simpl; auto. Qed.
32 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl. rewrite <- IHn. reflexivity. Qed.
33 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; auto. Qed.
34 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; now rewrite <- IHn. Qed.
35 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; rewrite <- (IHn); auto. Qed.
36 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; rewrite <- (plus_n_O n); reflexivity.
Qed.
37 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; rewrite <- IHn at 1; reflexivity. Qed.
38 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; rewrite <- IHn; auto. Qed.
39 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; rewrite <- IHn; reflexivity. Qed.
40 Proof. intro n; induction n as [| n IHn]. - reflexivity. - simpl; rewrite IHn at 1; reflexivity. Qed.
41 Proof. intro n; induction n as [| p Hp]. - reflexivity. - simpl; rewrite <- Hp; reflexivity. Qed.
42 Proof. intro n; induction n as [| p Hp]. - simpl; auto with arith. - simpl; rewrite <- Hp; reflexivity.
Qed.
43 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl. rewrite <- (IHp). reflexivity. Qed.
44 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl. rewrite <- IHp. reflexivity. Qed.
45 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; auto with arith. Qed.
46 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; auto. Qed.
47 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; rewrite -> IHp at 1; reflexivity. Qed.
48 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; rewrite <- IHp at 1; reflexivity. Qed.
49 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; rewrite <- IHp; auto with arith. Qed.
50 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; rewrite <- IHp; auto. Qed.
51 Proof. intro n; induction n as [| p IHp]. - reflexivity. - simpl; rewrite <- IHp; reflexivity. Qed.
52 Proof. intro n; induction n as [|n IH]. - reflexivity. - simpl; rewrite <- IH. reflexivity. Qed.
53 Proof. intro n; induction n as [|n IHn]. - reflexivity. - simpl; auto. Qed.
54 Proof. intro n; induction n as [|n IHn]. - reflexivity. - simpl; rewrite <- IHn; reflexivity. Qed.
55 Proof. intro n; induction n as [|n IHn]; simpl; auto. Qed.
56 Proof. intro n; induction n as [|n’ IHn’]. - reflexivity. - simpl; rewrite <- IHn’; reflexivity. Qed.
57 Proof. intro n; replace 0 with (S 0 - 1); auto. Qed.
58 Proof. intro n; rewrite (plus_comm n 0); auto with *. Qed.
59 Proof. intro n; rewrite (plus_comm n 0); auto with arith. Qed.
60 Proof. intro n; rewrite (plus_comm n 0); auto. Qed.
-----
61 Proof. intro n; rewrite (plus_comm n 0); reflexivity. Qed.
62 Proof. intro n; rewrite (plus_comm n 0); simpl; auto. Qed.
63 Proof. intro n; rewrite (plus_comm n 0); trivial. Qed.
64 Proof. intro n; rewrite <- (plus_n_O n); reflexivity. Qed.
65 Proof. intro n; rewrite <- Nat.add_0_r at 1. reflexivity. Qed.
66 Proof. intro n; rewrite Nat.add_0_r; reflexivity. Qed.
67 Proof. intro n; rewrite Nat.add_comm; auto. Qed.
68 Proof. intro n; rewrite Nat.add_comm; reflexivity. Qed.
69 Proof. intro n; rewrite add_comm. reflexivity. Qed.
70 Proof. intro n; rewrite add_comm; auto with arith. Qed.
71 Proof. intro n; rewrite add_comm; reflexivity. Qed.
72 Proof. intro n; rewrite plus_O_r; reflexivity. Qed.
73 Proof. intro n; rewrite plus_comm with (m := 0); auto. Qed.
74 Proof. intro n; rewrite plus_comm; apply plus_O_n. Qed.
75 Proof. intro n; rewrite plus_comm; auto. Qed.
76 Proof. intro n; rewrite plus_comm; exact (plus_O_n n). Qed.
77 Proof. intro n; rewrite plus_comm; reflexivity. Qed.
78 Proof. intro n; simpl. auto with arith. Qed.
79 Proof. intro n; simpl; auto with arith. Qed.
80 Proof. intro n; simpl; auto. Qed.
81 Proof. intro; apply eq_add_S ; auto with arith. Qed.
82 Proof. intro; apply eq_add_S; auto. Qed.
83 Proof. intro; apply nat_ind with (P := fun n => n = n + O). - reflexivity. - intros; simpl; auto. Qed.
84 Proof. intro; apply sym_eq; apply Nat.add_0_r. Qed.
85 Proof. intro; elim n using nat_rect. - reflexivity. - intros m Hrec; simpl in |- *; rewrite Hrec; auto
with arith. Qed.
86 Proof. intro; elim n. - reflexivity. - intros; simpl; auto with arith. Qed.
87 Proof. intro; elim n; simpl; auto. Qed.
88 Proof. intro; induction n as [| n IH]; simpl; auto. Qed.
89 Proof. intro; induction n as [| n IHn]. - reflexivity. - simpl; rewrite <- IHn; reflexivity. Qed.
90 Proof. intro; induction n as [| p IHp]; simpl; auto with arith. Qed.
91 Proof. intro; induction n as [| p IHp]; simpl; auto. Qed.
92 Proof. intro; induction n as [|n IH]; simpl; auto. Qed.
93 Proof. intro; induction n; simpl; auto. Qed.
94 Proof. intro; rewrite <- (Nat.add_comm 0); reflexivity. Qed.
95 Proof. intro; rewrite <- (add_comm 0); apply plus_O_n. Qed.
96 Proof. intro; rewrite <- (plus_n_O n); reflexivity. Qed.
97 Proof. intro; rewrite <- add_comm; auto with arith. Qed.
98 Proof. intro; rewrite <- add_comm; auto. Qed.
99 Proof. intro; rewrite <-plus_n_O. reflexivity. Qed.
100 Proof. intro; rewrite Nat.add_0_r; reflexivity. Qed.
101 Proof. intro; rewrite Nat.add_comm; auto with arith. Qed.
102 Proof. intro; rewrite Nat.add_comm; auto. Qed.
103 Proof. intro; rewrite Nat.add_comm; reflexivity. Qed.
104 Proof. intro; rewrite Nat.add_comm; trivial. Qed.
105 Proof. intro; rewrite add_comm; auto with *. Qed.
106 Proof. intro; rewrite add_comm; auto with arith. Qed.
107 Proof. intro; rewrite add_comm; auto. Qed.
108 Proof. intro; rewrite add_comm; reflexivity. Qed.
109 Proof. intro; rewrite add_comm; simpl; auto. Qed.
110 Proof. intro; rewrite add_comm; simpl; reflexivity. Qed.
111 Proof. intro; rewrite plus_comm; apply plus_O_n. Qed.
112 Proof. intro; rewrite plus_comm; auto with arith. Qed.
113 Proof. intro; rewrite plus_comm; auto. Qed.
114 Proof. intro; rewrite plus_comm; reflexivity. Qed.
115 Proof. intro; rewrite plus_comm; simpl; reflexivity. Qed.
116 Proof. intro; ring. Qed.
117 Proof. intro; ring_simplify; reflexivity. Qed.
118 Proof. intro; ring_simplify; trivial. Qed.
119 Proof. intro; simpl; auto with arith. Qed.
120 Proof. intros n; now rewrite <- Nat.add_comm. Qed.
121 Proof. intros. rewrite <- Nat.add_comm. apply eq_refl. Qed.
122 Proof. intros. rewrite <- plus_comm. reflexivity. Qed.
123 Proof. intros. rewrite Nat.add_comm; reflexivity. Qed.
124 Proof. intros; now rewrite Nat.add_0_r. Qed.
125 Proof. intros; now rewrite Nat.add_comm. Qed.
126 Proof. intros; rewrite <- (plus_n_O n); reflexivity. Qed.
127 Proof. intros; rewrite <- Nat.add_comm; reflexivity. Qed.
128 Proof. intros; rewrite <- add_comm; reflexivity. Qed.
129 Proof. intros; rewrite <- plus_n_O; reflexivity. Qed.
-----
130 Proof. intros; rewrite Nat.add_comm; apply Nat.add_0_l. Qed.
131 Proof. intros; rewrite Nat.add_comm; apply add_O_l. Qed.
132 Proof. intros; rewrite Nat.add_comm; apply plus_O_n. Qed.
133 Proof. intros; rewrite Nat.add_comm; reflexivity. Qed.
134 Proof. intros; rewrite plus_comm; exact (plus_O_n n). Qed.
135 Proof. intros; ring. Qed.
136 Proof. intros; simpl; auto with arith. Qed.
137 Proof. simpl. auto with arith. Qed.
138 Proof. simple induction n; auto. Qed.
139 Proof. simple induction n; simpl in |- *; auto with arith. Qed.
140 Proof. simple induction n; simpl; auto with arith. Qed.
141 Proof. simple induction n; simpl; auto. Qed.
**References**
[1] Andreas Florath. LLM Interactive Optimization of Open Source Python Libraries – Case Studies and Generalization.
[2024. arXiv: 2312.14949 [cs.SE].](https://arxiv.org/abs/2312.14949)
[2] [The Coq Development Team. The Coq Proof Assistant. accessed 2024-02-29. URL: https://coq.inria.fr/.](https://coq.inria.fr/)
[3] _[Lean. accessed 2024-03-18. URL: https://lean-lang.org.](https://lean-lang.org)_
[4] _[Isabelle. accessed 2024-03-18. URL: https://isabelle.in.tum.de.](https://isabelle.in.tum.de)_
[5] [Anton Lozhkov et al. StarCoder 2 and The Stack v2: The Next Generation. 2024. arXiv: 2402.19173 [cs.SE].](https://arxiv.org/abs/2402.19173)
[6] _[Huggingface Datasets. accessed 2024-03-01. URL: https://huggingface.co/datasets.](https://huggingface.co/datasets)_
[7] _[Huggingface Dataset: coq-github-scrape. accessed 2024-02-27. URL: https://huggingface.co/datasets/](https://huggingface.co/datasets/cassanof/coq-github-scrape)_
[cassanof/coq-github-scrape.](https://huggingface.co/datasets/cassanof/coq-github-scrape)
[8] _[Huggingface Dataset: coq-train. accessed 2024-02-27. URL: https://huggingface.co/datasets/metareflection/](https://huggingface.co/datasets/metareflection/coq-train)_
[coq-train.](https://huggingface.co/datasets/metareflection/coq-train)
[9] Kaiyu Yang and Jia Deng. “Learning to Prove Theorems via Interacting with Proof Assistants”. In: International
_Conference on Machine Learning (ICML). 2019._
[10] _[CC BY 2.0 LEGAL CODE Attribution 2.0 Generic. accessed 2024-03-01. URL: https://creativecommons.](https://creativecommons.org/licenses/by/2.0/legalcode.en)_
[org/licenses/by/2.0/legalcode.en.](https://creativecommons.org/licenses/by/2.0/legalcode.en)
[11] _[ShareAlike compatibility: GPLv3. accessed 2024-03-01. URL: https://wiki.creativecommons.org/wiki/](https://wiki.creativecommons.org/wiki/ShareAlike_compatibility:_GPLv3)_
[ShareAlike_compatibility:_GPLv3.](https://wiki.creativecommons.org/wiki/ShareAlike_compatibility:_GPLv3)
[12] _[License Compatibility Review Suggested for Dataset. accessed 2024-03-18. URL: https : / / github . com /](https://github.com/princeton-vl/CoqGym/issues/87)_
[princeton-vl/CoqGym/issues/87.](https://github.com/princeton-vl/CoqGym/issues/87)
[13] _[Dataset jbb/coq code. accessed 2024-03-01. URL: https://huggingface.co/datasets/jbb/coq_code.](https://huggingface.co/datasets/jbb/coq_code)_
[14] Pengyu Nie et al. “Deep Generation of Coq Lemma Names Using Elaborated Terms”. In: International Joint Conference
_[on Automated Reasoning. 2020, pp. 97–118. DOI: 10.1007/978-3-030-51054-1_6.](https://doi.org/10.1007/978-3-030-51054-1_6)_
[15] _[MathComp Corpus. accessed 2024-03-08. URL: https://github.com/EngineeringSoftware/math-](https://github.com/EngineeringSoftware/math-comp-corpus)_
[comp-corpus.](https://github.com/EngineeringSoftware/math-comp-corpus)
[16] [Pengyu Nie et al. Learning to Format Coq Code Using Language Models. 2020. arXiv: 2006.16743 [cs.HC].](https://arxiv.org/abs/2006.16743)
[17] _[Kaggle datasets. accessed 2024-03-01. URL: https://www.kaggle.com/datasets.](https://www.kaggle.com/datasets)_
[18] Jason Rute et al. Graph2Tac: Learning Hierarchical Representations of Math Concepts in Theorem proving. 2024. arXiv:
[2401.02949 [cs.LG].](https://arxiv.org/abs/2401.02949)
[19] David Brandfonbrener et al. Verified Multi-Step Synthesis using Large Language Models and Monte Carlo Tree Search.
[2024. arXiv: 2402.08147 [cs.SE].](https://arxiv.org/abs/2402.08147)
[20] _[Coq. accessed 2024-03-01. URL: https://github.com/coq/coq.](https://github.com/coq/coq)_
[21] _[Mathematical Components. accessed 2024-03-01. URL: https://github.com/math-comp.](https://github.com/math-comp)_
[22] _[coq-ext-lib. accessed 2024-03-01. URL: https://github.com/coq-community/coq-ext-lib.git.](https://github.com/coq-community/coq-ext-lib.git)_
[23] _[GeoCoq. accessed 2024-03-01. URL: https://github.com/GeoCoq/GeoCoq.](https://github.com/GeoCoq/GeoCoq)_
-----
[24] _[The Four Color Theorem. accessed 2024-03-01. URL: https://github.com/coq-community/fourcolor.](https://github.com/coq-community/fourcolor.git)_
[git.](https://github.com/coq-community/fourcolor.git)
[25] _[algebra-tactics. accessed 2024-03-01. URL: https://github.com/math-comp/algebra-tactics.git.](https://github.com/math-comp/algebra-tactics.git)_
[26] _[coqprime. accessed 2024-03-01. URL: https://github.com/thery/coqprime.](https://github.com/thery/coqprime)_
[27] _[100 famous theorems proved using Coq. accessed 2024-03-01. URL: https://github.com/coq-community/](https://github.com/coq-community/coq-100-theorems.git)_
[coq-100-theorems.git.](https://github.com/coq-community/coq-100-theorems.git)
[28] _[verdi. accessed 2024-03-01. URL: https://github.com/uwplse/verdi.](https://github.com/uwplse/verdi)_
[29] _[stdpp. accessed 2024-03-07. URL: https://gitlab.mpi-sws.org/iris/stdpp.git.](https://gitlab.mpi-sws.org/iris/stdpp.git)_
[30] _[Coq Facts, Propositions and Proofs. accessed 2024-03-18. URL: https://huggingface.co/datasets/](https://huggingface.co/datasets/florath/coq-facts-props-proofs-gen0-v1)_
[florath/coq-facts-props-proofs-gen0-v1.](https://huggingface.co/datasets/florath/coq-facts-props-proofs-gen0-v1)
[31] [Albert Q. Jiang et al. Mistral 7B. 2023. arXiv: 2310.06825 [cs.CL].](https://arxiv.org/abs/2310.06825)
[32] _[CoqLLM-FineTuned-Experiment-Gen0. accessed 2024-03-18. URL: https://huggingface.co/florath/](https://huggingface.co/florath/CoqLLM-FineTuned-Experiment-Gen0)_
[CoqLLM-FineTuned-Experiment-Gen0.](https://huggingface.co/florath/CoqLLM-FineTuned-Experiment-Gen0)
[33] [Gemini Team et al. Gemini: A Family of Highly Capable Multimodal Models. 2023. arXiv: 2312.11805 [cs.CL].](https://arxiv.org/abs/2312.11805)
[34] [OpenAI et al. GPT-4 Technical Report. 2024. arXiv: 2303.08774 [cs.CL].](https://arxiv.org/abs/2303.08774)
[35] _[Stylish Article. https://www.latextemplates.com/template/stylish-article. Accessed: 2023-11-](https://www.latextemplates.com/template/stylish-article)_
01.
-----
| [
"Andreas, Florath"
] | 2024-04-02T00:00:00 | null | false | 0 | 0 | [
"Coq"
] | http://arxiv.org/abs/2403.12627 | https://arxiv.org/abs/2403.12627 | https://www.semanticscholar.org/paper/7ad2758f011d13c58d9bff8a6d4457ee85656d8a |
Enhancing Large Language Models for Natural Language Mathematical Reasoning via Formal Proof AutoInformalization | This study introduces a method to improve Large Language Models’ (LLMs) mathematical reasoning capabilities by integrating formal proofs from Interactive Theorem Provers (ITPs) into their training. We fine-tune GPT-3.5, Mistral-7B, and Gemma-7B models with datasets pairing formal and informal proofs. The effectiveness of this approach is assessed using the Hendrycks MATH dataset and Massive Multitask Language Understanding (MMLU) benchmark. Results show improvements in LLMs’ performance on various mathematical categories, suggesting the potential of formal proofs to advance LLMs’ reasoning abilities. Further exploration of diverse formal proofs and advanced fine-tuning techniques is necessary to bolster LLMs’ formal mathematics comprehension. | null | [
"Brando, Miranda",
"Rishi, Padmanabhan",
"Shane, Mion",
"Ameya, Jadhav"
] | 2024-09-01T00:00:00 | null | false | 0 | 0 | null | null | null | null |
|
Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory | We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data. PANM integrates an external neural memory that uses novel physical addresses and pointer manipulation techniques to mimic human and computer symbol processing abilities. PANM facilitates pointer assignment, dereference, and arithmetic by explicitly using physical pointers to access memory content. Remarkably, it can learn to perform these operations through end-to-end training on sequence data, powering various sequential models. Our experiments demonstrate PANM's exceptional length extrapolating capabilities and improved performance in tasks that require symbol processing, such as algorithmic reasoning and Dyck language recognition. PANM helps Transformer achieve up to 100% generalization accuracy in compositional learning tasks and significantly better results in mathematical reasoning, question answering and machine translation tasks. | PANM helps Transformer achieve up to 100% generalization accuracy in compositional learning tasks and significantly better results in mathematical reasoning, question answering and machine translation tasks. | ## Enhancing Length Extrapolation in Sequential Models with Pointer-Augmented Neural Memory
Hung Le, Dung Nguyen, Kien Do, Svetha Venkatesh, Truyen Tran
Applied AI Institute, Deakin University, Geelong, Australia
[email protected]
Abstract
We propose Pointer-Augmented Neural Memory (PANM) to help neural networks understand and apply symbol processing to new, longer sequences of data.
PANM integrates an external neural memory that uses novel physical addresses
and pointer manipulation techniques to mimic human and computer symbol processing abilities. PANM facilitates pointer assignment, dereference, and arithmetic by explicitly using physical pointers to access memory content. Remarkably, it can learn to perform these operations through end-to-end training on sequence data, powering various sequential models. Our experiments demonstrate
PANM’s exceptional length extrapolating capabilities and improved performance
in tasks that require symbol processing, such as algorithmic reasoning and Dyck
language recognition. PANM helps Transformer achieve up to 100% generalization accuracy in compositional learning tasks and significantly better results in
mathematical reasoning, question answering and machine translation tasks.
1 Introduction
Systematic generalization underpins intelligence, and it relies on the ability to recognize abstract
rules, extrapolating them to novel contexts that are distinct yet semantically similar to the seen
data. Current neural networks or statistical machine learning fall short of handling novel data generated by symbolic rules even though they have achieved state-of-the-art results in various domains.
Some approaches can show decent generalization for single or set input data [Bahdanau et al., 2018,
Gao et al., 2020, Webb et al., 2020]. Yet, neural networks in general still fail in sequential symbol
processing tasks, even with slight novelty during inference [Lake and Baroni, 2018, Del´etang et al.,
2022]. For instance, these models can easily learn to duplicate sequences of 10 items, but they will
fail to copy sequences of 20 items if they were not part of the training data. These models overfit the
training data and perform poorly on out-of-distribution samples such as sequences of greater length
or sequences with novel compositions. The issue also affects big models like Large Language Models, making them struggle with symbolic manipulation tasks [Qian et al., 2023]. This indicates that
current methods lack a principled mechanism for systematic generalization.
From a neuroscience perspective, it has been suggested that the brain can execute symbol processing
through variable binding and neural pointers, wherein the sensory data are conceptualized into symbols that can be assigned arbitrary values [Kriete et al., 2013]. Like the brain, computer programs
excel at symbolic computations. Programmers use address pointers to dynamically access data or
programs, and have flexible control over the variable. Their programs can work appropriately with
unseen inputs.
Building on these insights, we propose a pointer-based mechanism to enhance generalization to unseen length in sequence prediction, which is a crucial problem that unifies all computable problems
[Solomonoff, 2010]. Our mechanism is based on two principles: (I) explicitly modeling pointers as
physical addresses, and (II) strictly isolating pointer manipulation from input data. As such, we need
Preprint. Under review.
-----
Figure 1: PANM architecture. (a) The data memory contains the encoded input sequence (b) The
address bank contains physical addresses associated with data memory slots. The base and end
addresses (pB, pE) define the address range of the input sequence. (c) The Pointer Unit takes pB, pE,
recurrently generates the current pointer p[a]t [and gets its value][ ∗][p]t[a] [via Mode-1 (red)/2 (green) Access.]
(d) The Controller takes pointer information, decoding input (zt = yt−), and produce the t-th output
token ˆyt.
to design a memory that supports physical pointers, and create a model that manipulates the pointers
to perform abstract rules and access to the memory. Our memory, dubbed Pointer-Augmented Neural Memory (PANM), is slot-based RAM [Von Neumann, 1993] where each memory slot consists
of two components: data and address. Unlike initial endeavors that implicitly model pointers as
attention softmax [Vinyals et al., 2015, Kurach et al., 2015, Le et al., 2018, Khan et al., 2021], our
addresses are generated to explicitly simulate physical memory addresses, i.e., incremental binary
numbers, which is critical for generalization to longer sequences.
To manipulate a pointer, we create an address bank that contains physical addresses corresponding to
the input sequence, and use a neural network called Pointer Unit that is responsible for transforming
pointers from an initial address in the address bank. Through attention to the address bank, a new
pointer is generated as a mixture of the physical addresses, which can point to different memory slots
to follow the logic of the task. We aim to let the Pointer Unit learn the symbolic rules of the task in
an end-to-end manner. Finally, given a (manipulated) pointer, the model can access the data through
2 modes of pointer-based access: pointer dereference (Mode-1) and relational access (Mode-2). Our
memory can be plugged into common encoder-decoder backbones such as LSTM or Transformer.
Our contribution is a novel memory architecture that incorporates explicit pointer and symbol processing, working seamlessly with sequential models to generalize better. We examine our model
in symbol-processing domains such as algorithms and context-free grammar where PANM effectively works with LSTM and StackRNN. We apply PANM to improve the generalization of Transformer models on compositional learning, using SCAN and mathematics datasets. Also, we observe
PANM’s superior performance in more realistic question answering and machine translation tasks.
Our focus is not on striving for state-of-the-art results requiring specialized designs tailored to specific tasks. Our objective is to highlight the generalization improvement achieved by integrating our
memory module into fundamental sequential models, with minimal architectural changes, and showcase the importance of using fundamental generalizing principles to address limitations of current
deep learning.
2 Methods
2.1 Problem Formulation
l(Xi)
In sequence-to-sequence (s2s) problems, each data point is an input sequence Xi = x[i]t t=1 [,]
l(Yi)
associated with a target sequence Yi = yt[i] t=1 [where][ l][ is a function returning the length of the]
l( ˆYi)
sequence. A model Φ takes the input Xi and generates an output sequence Y[ˆ]i = yˆt[i] t=1 [where]
the predicted sequence terminates as the model outputs a special token ˆy[i]
t=l( Y[ˆ]i) [=][ <] [ eos][ >][. Each]
predicted token is sampled from a categorical distribution, parameterized by Φ and conditioned on
-----
the input sequence and optionally with previous output tokens: ˆyt[i] [∼] [p][Φ][(][y][t][|][X][i][, y]t[i]−[)][ where][ y]t[i]− [can]
t−1 t−1
be yk[i] k=1 [(true outputs) or] yˆk[i] k=1 [(predicted outputs) or even zero, depending on the setting]
(training or inference). We train Φ by minimizing the following cross-entropy loss:
log pΦ(yt[i][|][X][i][, y]t[i]−[)]
L = Ei
We are interested in the ability to handle inputs of arbitrary length, so we focus on settings in which
the length of testing input sequences is larger than that of training ones: max l(Xi) < min l(Xj)
with Xi ∈Dtrain and Xj ∈Dtest. In the following sections, when there is no confusion, we will
drop the sample index i or j for ease of reading. We note that autoregression is a special case of the
s2s formulation where the input and output are from the same domain, and the target sequence is
one step ahead of the input sequence.
2.2 Pointer Modeling
Computers are powerful thanks to their ability to manipulate pointers. These pointers store the
address of data in memory. Following C programming language notation, let p denote a pointer
associated with a data d in memory M, then p is also the address of the memory slot containing d,
i.e., p = &d. We can access the data via p as [∗]p = d, which is also known as pointer dereference.
We can manipulate the pointer to execute various tasks. For example, if the task is to copy the list X
to a new list Y, repeating this simple pointer-based procedure performs correctly regardless of the
list length and the values in X: [∗]&Y = [∗]&X ; &Y = &Y + 1 ; &X = &X + 1 .
In this paper, we propose a way to model pointers by constructing a bank of addresses, analogous to
the addresses in computer architecture. The address bank starts with a base address pB and increases
to form an arithmetic sequence with the common difference of 1. For example, if the bank has 3
addresses and pB = 3, the addresses are A = {3, 4, 5}. We represent the address as b-bit binary
vectors, so we have A = {0010, 0011, 0100} when b = 4. The address space is 2[b] b-bit binary
vectors. Given a memory M containing l(M) slots, we bind each slot to an address in the address bank
A such that A[t] = &M[t] and [∗]A[t] = M[t] (1 ≤ t ≤ l(M)). We use pE as the end address to refer to
the final address corresponding to the last item in M.
The memory M stores the input sequence, and its size depends on the input length: l(M) = l(X). To
enable generalization, the address space should cover a wide range of addresses that is greater than
the sequence length range (i.e., 2[b] > max l(X)). More importantly, during training, all possible
addresses should be exposed to the model. Otherwise, any unexposed address will confuse the model
when it appears during inference. As such, during training, we uniformly sample the base address
pB from the address space to construct the address bank A for each sequence memory M. This ensures
any address in the address space will eventually appear between the base and end addresses. See
Appendix B for an illustrative example of the base address sampling mechanism and its complexity.
Given the address bank, we can perform pointer-based procedures to achieve generalization. To
do so, we need pointer variables pt denoting pointers used by the model at timestep t. As for the
copy task, the model outputs correctly by accessing and manipulating the pointer variables through
3 important pointer operations: pt = A[t] (assignment); ˆyt[i] [=][ ∗][p][t][ (][dereference][);][ p][t][ =][ p][t][ + 1]
(arithmetic), which will be described in the next section.
2.3 Pointer-Augmented Neural Memory (PANM)
PANM acts as an external memory module for any neural network to support it handling sequence
data. In such a memory-augmented neural network, a neural Controller (Ctrl) interacts with the
memory (M) to read/write data and make predictions on the output target. Unlike traditional neural
memory, PANM is equipped with an address bank (A) and a Pointer Unit (PU) to support pointer
operations. To simplify memory writing operations, PANM transforms the whole input sequence
X to the memory M in the encoding phase such that L = l(M) = l(X) using M = Encoderθ (X)
where X ∈ R[d][x][×][L], M ∈ R[d][m][×][L] and the Encoder, parameterized by θ, can be any neural encoder
such as LSTM or Transformer. The address bank A ∈{0, 1}[b][×][L] is then created and bound to M as
mentioned in the previous section. During decoding, the encoded information in M is not changed
and the controller focuses only on reading from M to produce the right output sequence. An overview
of PANM decoding process is given in Fig. 1.
-----
2.3.1 Pointer Unit
At each timestep t of the decoding process, PANM makes use of pointer variables p[a]t [, which are]
initialized as a valid address in the address space and then updated by the Pointer Unit PU. In
particular, the PU, implemented as an GRU [Chung et al., 2014], takes an address from A as its
initial inputs, e.g., p[a]0 [=][ p][B][, and recurrently produces a key][ h]t[a] [that performs][ address attention][ to]
create succeeding pointer variables p[a]t [:]
h[a]t [=][ GRU][ϕ] p[a]t−1[, h]t[a]−1 (1)
h[a]t [g]ϕ[a] [(][A][[][n][])]
wt[a] [[][n][] = softmax]
(2)
∥h[a]t [∥] gϕa [(][A][[][n][])]
p[a]t [=][ A][w]t[a] (3)
where h[a]0 [is initialized as][ 0][ and][ 1][ ≤] [n][ ≤] [l][(][X][)][,][ ϕ][ denotes the parameter of the][ PU][ and][ g][a][ (][·][)][ is]
a feed-forward neural network to transform the address to the same space as h[a]t [. According to][ §]
1’s principle I, p[a]t [is “softly”][ assigned][ a physical address value in the address bank. Our pointer,]
p[a]t [, offers several advantages over “implicit pointers” made up of the attention weights (][w]t[a][), which]
are commonly utilized in previous works [Vinyals et al., 2015, Luong et al., 2015]. First, p[a]t [is a]
combination of physical addresses represented by binary numbers, and therefore its dimension is
generally independent of the sequence length. In contrast, the dimension of wt[a] [varies with the]
input length. Therefore, arithmetic transformations on p[a]t [are easier than on][ w]t[a][. Second, longer]
testing length poses challenges for traditional attentions to accurately produce wt[a] [pointing to unseen]
location. Using “physical key” A to compute wt[a] [mitigates this issue by employing random physical]
address ranges (see § 2.2).
Following § 1’s principle II, the PU recurrently transforms the original p[a]0 [to a series of pointers]
{p[a]t [}][l]t[( ˆ]=1Y ) [suitable for the current task][ without using input data][. This prevents unseen testing inputs]
disturb PU’s transformations. In the copy example, an ideal arithmetic transformation ensure p[a]0 [=]
pB and p[a]t+1 [=][ p]t[a] [+ 1][, which performs perfectly for any sequence whose length][ ≤] [2][b][. We aim to]
learn PU to automatically discover pointer manipulation rules from the task. As the rules are learned,
generalization is achieved even when the testing sequence is longer or contains novel items.
2.3.2 Pointer-based Addressing Modes
Mode 1 In this mode, the content from memory M is retrieved directly by dereferencing pointers. To
dereference pointer p[a]t [, we utilize the][ A][ −] [M][ binding and the address attention weight][ w]t[a][, retrieving]
the pointer value associated with p[a]t [as][ ∗][p]t[a] [=][ M][w]t[a][. Through this dereference operator, we can ac-]
cess to arbitrary data in the memory M without relying on the content of the memory. This property
enables robustness in handling new sequence when the memory content is novel and the process
stays the same (e.g., copy a never-seen-before sequence). Accessing M indirectly via A allows more
memory slots to be added to M during inference without affecting the processing rule as long as the
PU can transform the pointer variable correctly. During training, PU experiences pointer variables
covering the whole address space because the base address is sampled randomly. Hence, it is possible for PU to correctly transform pointers that point to extended addresses of a growing memory
as long as the pointer manipulation rule does not change. The address attention can be used for
multiple pointer variables. In this case, there would be multiple pointer units {PUh}[H]h=1[a] [responsible]
H[a] H[a]
for several p[a]t,h ∗pat,h
h=1 [and] h=1 [where][ H] [a][ is the number of attention heads. These pointer]
values will be used by the Controller for other memory reading.n o n o
Mode 2 This mode uses a more complicated memory access mechanism to capture relations between
pointers in complicated reasoning tasks. The accessed content is not the one associated with the
current pointer, but those whose contents are related to the current pointer’s value. As an example,
selection sort algorithm may require comparing items in a list with the Mode-1 pointer’s item to
select the greater one. We simulate that using attention with the query as the current pointer value:
-----
Figure 2: Exemplar results on 2 algorithms. (a, b) Test accuracy (mean ± std) over 5 runs on Copy
and ID Sort on each length test, respectively. Random predictor would reach around 10% accuracy.
(c,d) Visualization of data and pointer’s slots for Copy and ID Sort, respectively.
qt = gϕ[c] ∗pat,h Hh=1[a] ;
qtM[n]
wt[c] [[][n][] = softmax]
∥qt∥∥M[n]∥
(4)
(5)
∗pct [=][ M][w]t[c] (6)
H[a]
Here, the pointer attention takes the concatenated values ∗pat,h
h=1 [as input, transforms them to a]
query qt using a feed-forward neural network g[c] (·), and returns the related pointer valuen o [∗]p[c]t [through]
attention mechanisms on M. Intuitively, the Pointer Unit manipulates the Mode-1 pointer p[a]t [such]
that it retrieves the desired content pointed by the Mode-2 pointer p[c]t [. We can also have multi-head]
H[c]
attention, which results in ∗pct,h
n oh=1 [where][ H] [c][ is the number of attention heads.]
2.3.3 The Controller
The Controller Ctrl is responsible for decoding the memory to produce outputs. Unlike other
methods, we have pointer-based memory access to provide the controller with symbol-processing
information. In particular, at the t-th step of the decoding, Ctrl takes the pointer values (mode 1
and 2) as input together with an optional decoding input (zt = yt−), and uses a GRU to recurrently
produce the hidden state h[c]t [as follows,]
h[c]t [=][ GRU][λ] ∗pat,h Hh=1a [,] ∗pct,h Hh=1c [, z][t], h[c]t−1 (7)
where the hidden state h[c]0 [is initialized as]h [ P]i [M][[][i][]][ and] [ λ][ is the parameters of]i [ Ctrl][. The][ GRU][ handles]
content-based input, empowered with pointer information to construct rich hidden state representations. Furthermore, the pointer-based data gives the GRU access to correct items in the sequence even
when the memory content becomes different from the training due to encountering novel sequence
length.
The Controller Ctrl uses the pointer values (mode 1), the related pointer values (mode 2) and the
hidden state h[c]t [to make the final prediction. It simply concatenates them and forward to the][ g][o][ (][·][)][–a]
MLP, to generate the output token
yˆt[i] [∼] [p][Φ][(][y][t][|][X][i][, z][t][) =][ g]λ[o] ∗pat,h Hh=1[a] [,] ∗pct,h Hh=1[c] [, h]t[c]
i
The pointer values allow g[o] to fully utilize pointer information in producing the final output.h Φ
consists of the parameters of the Encoderθ, Pointer Unit PUϕ and Controller Ctrlλ. Ctrl can
be put on top of another decoder to process the decoding input zt. In some experiments, we use
Transformer as the decoder (see Appendix C and D.3). A summary of PANM’s operation is given
in Algo. 1 in Appendix.
-----
Task Copy Reverse Mix D. Recall P. Sort ID Sort
Other Max 60.2 63.6 64.0 47.6 60.6 42.1
PANM (Ours) 74.8 73.6 81.2 52.8 67.8 59.2
Table 1: Algorithmic reasoning: mean sequence-level accuracy (%) over testing lengths Other Max
is selected as the best numbers at each length mode from other baselines.
TRM+RPE 20 12 31 61 100 100 100 94 100 100 100 91 0
SCAN (L cut-off) Math
Task
22 24 25 26 27 28 30 32 33 36 40 a.s p.v
U. TRM+RPE 20 12 71 100 100 100 100 100 100 100 100 97 75
U. TRM 2 5 14 21 26 0 6 35 0 0 0 94 20
TRM 0 4 19 29 30 8 24 36 0 0 0 89 12
PANM (Ours) 22 47 100 100 100 100 100 100 100 100 100 97 86
Table 2: SCAN (Left): Exact match accuracy (%, median of 5 runs) on splits of various lengths.
Mathematics (Right): mean accuracy over 5 runs. The baselines’ numbers are from Csord´as et al.
[2021] and we run PANM using the authors’ codebase.
3 Experimental Results
In our experiments, we use two pointer variables in Mode-1 access and one for Mode-2 to balance
between performance and computing cost (H [a] = 2, H [c] = 1, see more in Appendix C). The two
Mode-1 pointer variables are initialized as the base and end addresses. All MLPs in PANM have 1
hidden layer of 128 dimensions. We use 256-dimensional GRUs for PU and Ctrl. The memory’s
address space has b = 10 bits, corresponding to a maximum of 1024 unique addresses, which is
greater than any sequence length in the experiments.
In §3.1-3.3, our chosen tasks are representative of various types of symbolic reasoning and wellknown benchmarks to measure the symbol-processing capability of ML models. To showcase that
these tasks are non-trivial, we report how Chat-GPT failed on our tasks in Appendix D.6. We
validate the contribution of our methods in other practical tasks in §3.4. We also explain the choice
of competing baselines in Appendix D.
3.1 Algorithmic Reasoning
In our first experiment, we study the class of symbol processing problems where an output sequence
is generated by a predefined algorithm applied to any input sequence (e.g., copy and sort). The
tokens in the sequences are symbols from 0 to 9. The input tokens can be coupled with meta
information related to the task such as the priority score in Priority Sort task. During training,
the input sequences have length up to L tokens and can grow to L + 1, 2(L + 1), 4(L + 1) or
8(L + 1) during testing. Our setting is more challenging than previous generalization tests on
algorithmic reasoning because of four reasons: (1) the task is 10-class classification, harder than
binary prediction in Graves et al. [2014], Le and Venkatesh [2022], (2) the testing data can be eight
time longer than the training and the training length is limited to L ≈ 10, which is harder than
Grefenstette et al. [2015], (3) there is no curriculum learning as in Kurach et al. [2015], and (4) the
training label is the one-hot value of the token, which can be confusing in case one token appears
multiple times in the input sequence and tougher than using label as the index/location of the token
as in Vinyals et al. [2015].
Here, we design several tasks. Content-free tasks involve permuting tokens in input sequence using
certain position-based rules: First-In-First-Out (Copy), Last-In-First-Out (Reverse) and Mix. While
the first two rules demand linear pointer manipulations (traverse the sequence from the head or tail,
to output the target token), the last one uses a non-linear, length-dependent manipulation rule: if t is
odd, yt = x⌈ L2 [⌉][; if][ t][ is even,][ y][t][ =][ x][1][.][ Content-based tasks][ need the input’s token value together]
with symbol processing to arrange the output sequence. We introduce 3 tasks: Dynamic Recall,
Priority Sort and ID Sort. Readers can find the details of these tasks in Appendix D.1.
Baselines are categorized into 4 groups: (1) Traditional RNNs such as LSTM
[Hochreiter and Schmidhuber, 1997], (2) Sequential attention models: Content Attention
-----
Figure 3: (a) Dyck: mean ± std. accuracy over 5 runs with different testing lengths. (b) Machine
translation task: Perplexity on Multi30K dataset (the lower the better). We sort the sequences in the
data by length and create 2 settings using train/test split of 0.8 and 0.5, respectively. The baselines
are Transformer and PANM. Left: The best test perplexity over 2 settings for different number of
Transformer’s layers (1 to 3 layers). Right: an example of testing perplexity curves over training
epochs for the case of 0.5 train/test split (2 layers) where we run 3 times and report the mean±std.
The y-axis is visualized using log scale.
[Bahdanau et al., 2014], Location Attention [Luong et al., 2015], Hybrid Attention (our baseline
concatenates the attention vectors from content and location attention), (3) MANNs such as NTM
[Graves et al., 2014], DNC [Graves et al., 2016], Neural Stack [Grefenstette et al., 2015] and
Transformer [Vaswani et al., 2017], and (4) pointer-aware models: NRAM [Kurach et al., 2015],
PtrNet [Vinyals et al., 2015], ESBN [Webb et al., 2020] and our method PANM. In this synthetic
experiment, we adopt LSTM as the encoder for PANM. All baselines are trained with fixed number
of steps (100K for ID Sort and 50K for the rest), which is enough for the training loss to converge.
For each task, each baseline is trained 5 times with different random seeds and we use the best
checkpoint on L + 1 mode validation to evaluate the baselines.
Results We report the average accuracy across different testing length for each task in Table 1. Overall, PANM significantly outperforms the best competitors ranging from 10-20% per task. Compared
with individual baselines, the improvement is much higher (Appendix D.1). We illustrate how the
pointer manipulation works for Copy and ID Sort in Fig. 2 (c) and (d). In Copy, only Mode-1 access
is needed. As decoding step t increases, Pointer Unit generates p[a]t [following the increment of the]
addresses as expected. In ID Sort, both Mode-1 and 2 are needed. The Pointer Unit generates p[a]t
incrementally to trace the input tokens from left to right (Mode 1). Then, the Mode-2 pointer p[c]t
is computed via attention to discover token with the same id, which will be the output at step t.
Without Mode-2 access, PANM certainly fails this task. Experiments with varied number of heads
are in Appendix D.5.
3.2 Dyck Language Recognition
Truly understanding the hidden law of context-free grammars such as Dyck (Dn) is challenging for
neural networks, even those with memory and attention [Yu et al., 2019]. The language consists of
strings with balanced pairs of brackets of different types (|1, |2,...,|n), generated by the following
rules: S →|iS|i with probability p/n or SS with probability q or ǫ with probability 1 − p − q.
Here, p, q are parameter of the language and ǫ is equivalent to EOS token. We follow the sequence
prediction task and datasets in Suzgun et al. [2019] where the input is an unfinished Dyck string,
and the output is the set of possible brackets for the next token, e.g., for D2, ([] → ( or ) or [. We
follow the authors to enable set prediction by representing output yt as a multi-hot vector.
We adapt PANM to this autoregression task by masking M to ensure the decoder only see tokens up
to the current decoding step. Since the token set is simple, we do not need to use any encoder, i.e.,
raw input tokens are stored in M. The SOTA baseline in this task is SRNN [Suzgun et al., 2019], an
autoregressive model using stack as memory. We use this model as the decoder to make the setup of
PANM close to SRNN. The only difference is that PANM has Pointer-Based Memory Access (Fig.
1 (b)). To make the task more challenging, we limit the maximum training length L to 10 (D2) and
20 (D3) tokens, and the testing lengths are L + 2, 2L, 4L, 8L. We choose L as minimum numbers
such that the model can perform decently on training data. The standard training and testing sizes
are 5000. We train the models for 5 epochs and evaluate them on the training data at the end of each
epoch to save model checkpoints. We use the best checkpoints for generalization test.
-----
Split
Model
0.8-0.2 0.5-0.5
Transformer 0.79 ± 0.01 0.76 ± 0.01
U. TRM+ RPE 0.80 ± 0.02 0.75 ± 0.01
PANM (Ours) 0.85 ± 0.02 0.81 ± 0.03
Table 3: bAbI QA: Testing accuracy (mean ± std.) over 5 runs.
Fig. 3 (a) reports the models’ accuracy for D2 and D3. Under our extreme setting, SRNN generalization fades out quickly as test lengths increase, especially for D3 whereas PANM performance
degradation happens at a much slower rate, outperforming SRNN by around 20% on average in both
tasks at any test lengths.
3.3 Compositional Learning
SCAN In this task, one needs to map an input sentence into an output sequence of commands
[Lake and Baroni, 2018]. The sequences are compositional, consisting of reusable parts. For example, in one case, “jump twice” should be mapped to “JUMP JUMP” and in another, “walk twice”
becomes “WALK WALK”. We focus on the “length split” datasets where the training sequences are
shorter than the test ones with 11 length modes L = 22, 24, .., 40 [Newman et al., 2020]. We adopt
the benchmark, training procedure and baselines prepared by Csord´as et al. [2021], which achieves
strong results under standard s2s learning. Here, our aim is not to break SOTA, which can be achieve
by hybrid-symbolic architectures [Chen et al., 2020, Shaw et al., 2021]. Instead, we focus on improving Transformer generalization in this task, hence the baselines are chosen as several variants of
Transformers (TRM) targeted to sequence extrapolation, including those using Relative Positional
Encoding (RPE [Dai et al., 2019]) and Universal Transformer (U. TRM [Dehghani et al., 2018]),
which is an advanced Transformer variant that recurrently processes each token, and can dynamically adjust the number of processing steps. Following Csord´as et al. [2021], each baseline is trained
5 times for 50K steps and the resulting model after training is used for evaluation (no validation).
Here, we use Transformer as the Encoder, which is the same as the TRM, and stack the Controller to
another Transform decoder (see details in Appendix D.3). Hence, the only difference is the decoding
where PANM leverages pointer manipulation.
Table 2 shows that PANM outperforms other baselines in the hardest settings when the training
length is up-to 22, 24, and 25. For 22 and 24 cases, general models like PANM cannot show perfect
generalization because some testing compositions is entirely discarded from the train set. In easier
settings, PANM shares the perfect median accuracy with the sophisticated U. TRM + RPE although
it does not use RPE. Remarkably, despite sharing the same encoder, TRM performs much worse
than PANM and even fails to learn in easy modes (33, 36, 40), indicating the importance of pointer
handling in this testbed. One problem for other baselines is the EOS decision (when to generate
ending token), which requires length tracking [Newman et al., 2020]. As they do not have contentfree sequence iteration mechanisms, it is extremely hard to trace the length without overfitting to
the training data. On the other hand, PANM can hypothetically generate pointer incrementally and
capture the difference between the last and the first pointers, i.e. the input length, and infer the
output sequence length based on that information.
Mathematical Problems We test our model on mathematics [Saxton et al., 2018]
where the input/output are questions and answers about math and each token is
a character. For example, What is − 5 − 110911? →−110916 (add or sub) and
What is the hundreds digit of 31253? → 2 (place value). The task requires not only
math reasoning, but also natural language understanding. We follow the training from Csord´as et al.
[2021] to conduct experiments on 2 subsets: add or sub (a.s) and place value (p.v), and
compare our method with Transformer-based baselines. Here, we focus on the extrapolating test set
involving larger numbers, more numbers, more compositions, and thus longer input sequences than
the training. We use TRM + RPE as the Encoder and the Controller is added to a normal TRM
decoder. As shown in Table 2, on place value, PANM does not suffer from performance crash
as TRM + RPE (0% test accuracy, as admitted in the paper [Csord´as et al., 2021] even though it
uses the same encoder). PANM achieves similar results as U. TRM+ RPE on add or sub while
outperforming it by 11% on place value. We also examine PANM with the original Transformer
and report additional results in Appendix D.3.
-----
Split
Model 0.8-0.2 0.5-0.5
F1 EM F1 EM
BERT 0.77 0.64 0.73 0.59
PANM (Ours) 0.78 0.65 0.76 0.61
Table 4: SQUAD 1.1: Testing accuracy after 3 epoch fine-tuning. F1 score and exact match (EM)
follows the standard evaluation in Kenton and Toutanova [2019].
3.4 Other NLP Tasks
Question Answering Our objective is to explore the PANM’s generalization beyond obviously compositional data by applying it in a more practical setting of question answering. For this purpose,
we utilize two datasets, namely bAbI [Weston et al., 2015] and SQUAD 1.1 [Rajpurkar et al., 2016]
where the input sequence is a context paragraph and a question, and the output is the answer. To
add complexity to the task, we ensure the length of test sequence is greater than that of the training by sorting the context paragraph by length and splitting the sorted data into 0.8/0.2 and 0.5/0.5
ratio. Details of the data/task are in Appendix D.4. In bAbI, we configure the PANM similarly to
the one described in § 3.3 using Transformer backbone, and test the models after 100-epoch training. The models predict the answer tokens given the context and question tokens. As shown in
Table 3 and Appendix Fig. 5 (right), PANM helps Transformer generalize better, consistently improving around 6% and 5% using 0.8/0.2 and 0.5/0.5 splits, respectively. Notably, PANM’s testing
loss is not diverged quickly as Transformer’s, indicating PANM’s capability of reducing overfitting. In SQUAD, we use BERT as the backbone to predict the start and the end of the answer
as in Kenton and Toutanova [2019]. PANM-assisted model outperforms the baselines by 1% and
2% exact match accuracy, respectively (Table 4). The improvement is significant as BERT is a big
foundation model already pretrained with big data and robust against novel test data.
Machine Translation Here, we want to verify the PANM in machine translation and show that
PANM can work with different number layers of Transformer. The results are presented in Fig. 3
(b) where we report the model perplexity on Multi30K (en-de) dataset. The 30K-sample dataset is
sorted by input length and split into training and testing s.t. testing sequences are longer, similar to
QA task. The results demonstrate PANM can consistently improve the generalization performance
of Transformer across different split ratios and the number of encoder/decoder layers.
4 Related works
There are many attempts to augment neural networks with external memory (MANN) to improve
their symbol-processing ability. Pioneers such as NTM [Graves et al., 2014] and DNC [Graves et al.,
2016] propose computer-like memory read/write operations with content-based attention mechanisms, and thus in principle, can execute any symbolic rules. However, learning the hidden law
end-to-end from sequence data is extremely difficult. Therefore, MANNs including Transformers
[Vaswani et al., 2017], may fail miserably in out-of-distribution testbeds, especially length extrapolation [Del´etang et al., 2022]. Recent LLMs are good at reasoning and generalization, but bad at
symbolic processing [Qian et al., 2023, Tang et al., 2023]. We use LLMs only to show our task
difficulty (Appendix D.6), not as a baseline, because they are not on the same scale as our method.
Many recent works advocate the use of specialized memory architectures such as stacks
[Grefenstette et al., 2015, Hao et al., 2018, Suzgun et al., 2019], key-value memory [Webb et al.,
2020, Le et al., 2020a] and improved attentions [Kurach et al., 2015, Russin et al., 2019,
Dubois et al., 2020]. These methods employ different inductive biases in designing the memory
and attention, yet not following the two principles advocated by our paper. Although they may work
remarkably on certain synthetic tasks, they are not examined on various benchmarks or compatible with different sequential backbones. Other orthogonal approaches focus on model initialization
[Zhang et al., 2019], data augmentation [Andreas, 2020] or training details [Csord´as et al., 2021].
Besides differentiable models, there are major progress in compositional rule learning that leverage
neuro-symbolic architectures [Nye et al., 2020, Shaw et al., 2021, Chen et al., 2020] or reinforcement learning [Liu et al., 2020]. We have not compared our model with these task-specific methods,
as our focus is on improving the systematic generalization of fundamental differentiable models.
-----
Our approach is mainly related to key-value memory because the address bank can be viewed as the
key and the data memory as the value. However, the key in other works is either learned through
backpropagation [Le et al., 2020a] or computed based on the input data [Webb et al., 2020]. In
contrast, our “keys” are generated as fixed numbers (physical memory addresses– § 1’s principle I),
which is totally separated from the data and extendable to longer sequences. We argue that using
addresses as keys is critical to symbol processing because it explicitly allows pointer assignment,
dereference and arithmetic. A related generalization-enable scheme is to design positional encoding
of tokens in a sequence [Vaswani et al., 2017, Dai et al., 2019, Li and McClelland, 2022]. Unlike
these approaches, our physical addresses are detached from the data to support transforming pointers
through time steps and isolating pointer manipulation from the input.
5 Discussion
We introduce a neural memory model called PANM that manipulates pointers to learn symbol processing rules for better length extrapolation. PANM isolates symbols from data and uses an address
bank to allow data-isolated pointer manipulation through address attention. PANM consistently outperforms strong baselines in tasks such as algorithm mining, compositional learning, mathematics
reasoning, context-free grammar recognition, and practical NLP tasks, even when the test sequence
is much longer than the training sequence. Reproducibility In the Appendix, we included our detailed model descriptions, algorithms, implementations, hyperparameters to replicate the results of
our experiments. Source code will be released when the paper is published.
Impact Statements In this work, we used the publicly available datasets for experiments. We did
not collect human or animal data during this study. Our work aims to improve the generalization of
sequential models. This aim is genuine, and we do not think there are immediate harmful consequences. However, we are aware of potential problems if our method is used to augment language
models to generate texts that are hallucinated or contain negative contents. This issue is typical for
plug-and-play modules like PANM, and we will do our best to prevent it from our end.
References
Jacob Andreas. Good-enough compositional data augmentation. In Proceedings of the 58th Annual
Meeting of the Association for Computational Linguistics, pages 7556–7566, 2020.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly
learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
Dzmitry Bahdanau, Shikhar Murty, Michael Noukhovitch, Thien Huu Nguyen, Harm de Vries,
and Aaron Courville. Systematic generalization: What is required and can it be learned? In
International Conference on Learning Representations, 2018.
Xinyun Chen, Chen Liang, Adams Wei Yu, Dawn Song, and Denny Zhou. Compositional generalization via neural-symbolic stack machines. Advances in Neural Information Processing Systems,
33:1690–1701, 2020.
Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of
gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.
R´obert Csord´as, Kazuki Irie, and J¨urgen Schmidhuber. The devil is in the detail: Simple tricks
improve systematic generalization of transformers. arXiv preprint arXiv:2108.12284, 2021.
Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G Carbonell, Quoc Le, and Ruslan Salakhutdinov.
Transformer-xl: Attentive language models beyond a fixed-length context. In Proceedings of the
57th Annual Meeting of the Association for Computational Linguistics, pages 2978–2988, 2019.
Mostafa Dehghani, Stephan Gouws, Oriol Vinyals, Jakob Uszkoreit, and Lukasz Kaiser. Universal
transformers. In International Conference on Learning Representations, 2018.
Gr´egoire Del´etang, Anian Ruoss, Jordi Grau-Moya, Tim Genewein, Li Kevin Wenliang, Elliot Catt,
Marcus Hutter, Shane Legg, and Pedro A Ortega. Neural networks and the chomsky hierarchy.
arXiv preprint arXiv:2207.02098, 2022.
-----
Yann Dubois, Gautier Dagan, Dieuwke Hupkes, and Elia Bruni. Location attention for extrapolation to longer sequences. In Proceedings of the 58th Annual Meeting of the Association for
Computational Linguistics, pages 403–413, 2020.
Aaron Eisermann, Jae Hee Lee, Cornelius Weber, and Stefan Wermter. Generalization in multimodal
language learning from simulation. In 2021 International Joint Conference on Neural Networks
(IJCNN), pages 1–8. IEEE, 2021.
Tong Gao, Qi Huang, and Raymond Mooney. Systematic generalization on gscan with language
conditioned embedding. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of
the Association for Computational Linguistics and the 10th International Joint Conference on
Natural Language Processing, pages 491–503, 2020.
Alex Graves, Greg Wayne, and Ivo Danihelka. Neural turing machines. arXiv preprint
arXiv:1410.5401, 2014.
Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka GrabskaBarwi´nska, Sergio G´omez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou,
et al. Hybrid computing using a neural network with dynamic external memory. Nature, 538
(7626):471–476, 2016.
Edward Grefenstette, Karl Moritz Hermann, Mustafa Suleyman, and Phil Blunsom. Learning to
transduce with unbounded memory. Advances in neural information processing systems, 28,
2015.
Yiding Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, and Simon
Mendelsohn. Context-free transductions with neural stacks. EMNLP 2018, page 306, 2018.
Sepp Hochreiter and J¨urgen Schmidhuber. Long short-term memory. Neural computation, 9(8):
1735–1780, 1997.
Jennifer Hu, Jon Gauthier, Peng Qian, Ethan Wilcox, and Roger P Levy. A systematic assessment
of syntactic generalization in neural language models. arXiv preprint arXiv:2005.03692, 2020.
Jacob Devlin Ming-Wei Chang Kenton and Lee Kristina Toutanova. Bert: Pre-training of deep
bidirectional transformers for language understanding. In Proceedings of NAACL-HLT, pages
4171–4186, 2019.
Asjad Khan, Hung Le, Kien Do, Truyen Tran, Aditya Ghose, Hoa Dam, and Renuka Sindhgatta.
Deepprocess: supporting business process execution using a mann-based recommender system.
In Service-Oriented Computing: 19th International Conference, ICSOC 2021, Virtual Event,
November 22–25, 2021, Proceedings 19, pages 19–33. Springer, 2021.
Trenton E. Kriete, David C. Noelle, Jonathan D. Cohen, and Randall C. O’Reilly. Indirection and
symbol-like processing in the prefrontal cortex and basal ganglia. Proceedings of the National
Academy of Sciences, 110:16390 – 16395, 2013.
Karol Kurach, Marcin Andrychowicz, and Ilya Sutskever. Neural random-access machines. arXiv
preprint arXiv:1511.06392, 2015.
Brenden Lake and Marco Baroni. Generalization without systematicity: On the compositional skills
of sequence-to-sequence recurrent networks. In International conference on machine learning,
pages 2873–2882. PMLR, 2018.
Brenden M Lake. Compositional generalization through meta sequence-to-sequence learning. Advances in neural information processing systems, 32, 2019.
Hung Le and Svetha Venkatesh. Neurocoder: General-purpose computation using stored neural
programs. In International Conference on Machine Learning, pages 12204–12221. PMLR, 2022.
Hung Le, Truyen Tran, and Svetha Venkatesh. Dual memory neural computer for asynchronous
two-view sequential learning. In Proceedings of the 24th ACM SIGKDD international conference
on knowledge discovery & data mining, pages 1637–1645, 2018.
-----
Hung Le, Truyen Tran, and Svetha Venkatesh. Neural stored-program memory. In International Conference on Learning Representations, 2020a. URL
[https://openreview.net/forum?id=rkxxA24FDr.](https://openreview.net/forum?id=rkxxA24FDr)
Hung Le, Truyen Tran, and Svetha Venkatesh. Self-attentive associative memory. In Proceedings of
the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine
Learning Research, pages 5682–5691, Virtual, 13–18 Jul 2020b. PMLR.
Yuxuan Li and James McClelland. Systematic generalization and emergent structures in transformers trained on structured tasks. In NeurIPS ’22 Workshop on
All Things Attention: Bridging Different Perspectives on Attention, 2022. URL
[https://openreview.net/forum?id=BTNaKmYdQmE.](https://openreview.net/forum?id=BTNaKmYdQmE)
Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng,
and Dongmei Zhang. Compositional generalization by learning analytical expressions. Advances
in Neural Information Processing Systems, 33:11416–11427, 2020.
Minh-Thang Luong, Hieu Pham, and Christopher D Manning. Effective approaches to attentionbased neural machine translation. arXiv preprint arXiv:1508.04025, 2015.
Benjamin Newman, John Hewitt, Percy Liang, and Christopher D. Manning. The
eos decision and length extrapolation. In BlackBoxNLP@EMNLP, 2020. URL
[https://nlp.stanford.edu/pubs/newman2020extrapolation.pdf.](https://nlp.stanford.edu/pubs/newman2020extrapolation.pdf)
Maxwell Nye, Armando Solar-Lezama, Josh Tenenbaum, and Brenden M Lake. Learning compositional rules via neural program synthesis. Advances in Neural Information Processing Systems,
33:10832–10842, 2020.
Jing Qian, Hong Wang, Zekun Li, Shiyang Li, and Xifeng Yan. Limitations of language models in
arithmetic and symbolic induction. In Anna Rogers, Jordan L. Boyd-Graber, and Naoaki Okazaki,
editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023, Toronto, Canada, July 9-14, 2023, pages 9285–9298.
Association for Computational Linguistics, 2023. doi: 10.18653/v1/2023.acl-long.516. URL
[https://doi.org/10.18653/v1/2023.acl-long.516.](https://doi.org/10.18653/v1/2023.acl-long.516)
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. Squad: 100,000+ questions for
machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in
Natural Language Processing, pages 2383–2392, 2016.
Jake Russin, Jason Jo, Randall C O’Reilly, and Yoshua Bengio. Compositional generalization in a
deep seq2seq model by separating syntax and semantics. arXiv preprint arXiv:1904.09708, 2019.
David Saxton, Edward Grefenstette, Felix Hill, and Pushmeet Kohli. Analysing mathematical reasoning abilities of neural models. In International Conference on Learning Representations,
2018.
Peter Shaw, Ming-Wei Chang, Panupong Pasupat, and Kristina Toutanova. Compositional generalization and natural language variation: Can a semantic parsing approach handle both? In
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the
11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers),
pages 922–938, 2021.
Ray J Solomonoff. Algorithmic probability, heuristic programming and agi. In 3d Conference on
Artificial General Intelligence (AGI-2010), pages 57–63. Atlantis Press, 2010.
Mirac Suzgun, Sebastian Gehrmann, Yonatan Belinkov, and Stuart M Shieber. Memoryaugmented recurrent neural networks can learn generalized dyck languages. arXiv preprint
arXiv:1911.03329, 2019.
Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, and Muhan
Zhang. Large language models are in-context semantic reasoners rather than symbolic reasoners.
arXiv preprint arXiv:2305.14825, 2023.
-----
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. Advances in neural information processing systems, 28, 2015.
John Von Neumann. First draft of a report on the edvac. IEEE Annals of the History of Computing,
15(4):27–75, 1993.
Taylor Whittington Webb, Ishan Sinha, and Jonathan Cohen. Emergent symbols through binding in
external memory. In International Conference on Learning Representations, 2020.
Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merri¨enboer, Armand
Joulin, and Tomas Mikolov. Towards ai-complete question answering: A set of prerequisite toy
tasks. arXiv preprint arXiv:1502.05698, 2015.
Bo Wu, Haoyu Qin, Alireza Zareian, Carl Vondrick, and Shih-Fu Chang. Analogical reasoning for
visually grounded language acquisition. arXiv preprint arXiv:2007.11668, 2020.
Greg Yang. Lie access neural turing machine. arXiv preprint arXiv:1602.08671, 2016.
Xiang Yu, Ngoc Thang Vu, and Jonas Kuhn. Learning the dyck language with attention-based
seq2seq models. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 138–146, 2019.
Biao Zhang, Ivan Titov, and Rico Sennrich. Improving deep transformer with depth-scaled initialization and merged attention. In Proceedings of the 2019 Conference on Empirical Methods in
Natural Language Processing and the 9th International Joint Conference on Natural Language
Processing (EMNLP-IJCNLP), pages 898–909, 2019.
-----
Appendix
A More Discussion on Related Works
The proposed address attention in our paper is comparable to two known mechanisms: (1) locationbased attention [Luong et al., 2015, Dubois et al., 2020] and (2) memory shifting [Graves et al.,
2014, Yang, 2016]. The former uses neural networks to produce attention weights to the memory/sequence, which cannot help when the memory grows during inference since the networks never
learn to generate weights for the additional slots. Inspired by Turing Machine, the latter aims to shift
the current attention weight associated with a memory slot to the next or previous slot. Shifting-like
operations can handle any sequence length. However, it cannot simulate complicated manipulation
rules. Unlike our PU design which obeys § 1’s principle II, the attention weight and the network
trained to shift it depend on the memory content M. That is detrimental to generalization since new
content can disturb both the attention weight and the shifting network as the memory grows.
Another line of works tackles systematic generalization through meta-learning training [Lake, 2019],
while our method employs standard supervised training. These approaches are complementary, with
our method concentrating on enhancing model architecture rather than training procedures, making
it applicable in diverse settings beyond SCAN tasks. Additionally, the study by Hu et al. (2020)
addresses syntactic generalization [Hu et al., 2020], a different problem compared to our paper,
which emphasizes length extrapolation across various benchmarks. Notably, our paper considers
similar baselines, such as LSTM and Transformer, as those examined in the referenced papers.
There are other lines of research targeting reasoning and generalization using image input[Wu et al.,
2020, Eisermann et al., 2021]. They are outside the scope of our paper, which specifically addresses
generalization for longer sequences of text or discrete inputs
Our address bank and physical pointers can be viewed as some form of positional encoding. However, we do not use simple projections or embeddings to force the attention to be position-only.
Instead, we aim to learn a series of transformations that simulate the position-based symbolic rules.
At each time step, a new pointer ("position") is dynamically generated that reflects the manipulation
rule required by the task (e.g. move to the next location), which is unlike the positional encoding
approaches such as RPE [Dai et al., 2019] which aims to provide the model with information on the
relative position or distance of the timesteps. We summarise the difference between our method and
Transformer in Table 5.
B More Discussion on Base Address Sampling Mechanism
We provide a simple example to illustrate how base address sampling help in generalization. Assume
the training sequence length is 10, and the desired manipulation is p[′] = p + 1 (copy task). Assume
the possible address range is 0, 1, ..., 19, which is bigger than any sequence length. If pB = 0, the
training address bank contains addresses: 0, 1, ...8, 9. Without base address sampling, the model
always sees the training address bank of 0, 1, ...8, 9 and thus can only learn manipulating function
for 0 ≤ p ≤ 9, thereby failing when testing address bank includes addresses larger than 9.
Thanks to base address sampling, at some point of training, pB = 10, the training address bank is
10, 11, ...13, 19. The manipulating function sees p > 9 and can learn to transform p[′] = p + 1 for
p > 9, e.g., transform p = 10 → p[′] = 11. The learning happens because the pointer’s value ([∗]p) is
used to predict the output sequence. The task loss will reward [∗]p that follows the rule, and update
the Pointer Unit such that it transforms the p following the rule to minimize the loss. During testing,
the input length can be 12, we set pB = 0 and the address bank is 0, 1, ...., 10, 11. The learned
manipulation can still be applied to new locations 10th, and 11th.
We can prove that the complexity of exposing all addresses to the model is practically small compared to the normal training. Assume the training input sequence length is L, and the number of
possible addresses is Lmax. Here, Lmax indicates the possible testing length that the model can
handle. When Lmax →∞, the expected number of samples required for exposing all addresses is
O (n log n) where n = Lmax/L (we can formulate this problem as Coupon collector’s problem).
For example, in even an extreme address range of Lmax = 10[6] (in practice we rarely need that big
range) to train input sequences of length 10, we only need to sample 10[5] log 10[5] sequences, which
is often smaller than the size of the training datasets. Empirically, in our experiments, we always
train our method with the same number of batch size and training steps as other baselines to ensure
-----
Difference Transformer PANM (Our)
Key Keys are computed based on input The keys in our approach are
Generation data. Hence, when meeting novel generated as fixed numbers,
data during testing, Transformer specifically physical memory
will observe novel keys, and cannot addresses. These keys are entirely
work properly. separate from the data.
Extendable The dimension of attention weights The fixed nature of our
to Longer varies with input length, making physical addresses allows
Sequences arithmetic transformations on our pointers to be easily
these attention weights infeasible manipulated and extendable
as the sequence length increases. to longer sequences.
Symbol The use of attention weights Using physical addresses as keys
Processing as implicit pointers may lack in our approach is crucial for symbol
Advantages the explicitness needed for processing as it explicitly allows
effective symbol processing. pointer assignment, dereference,
and arithmetic operations.
Physical Address Positional encoding can be generated Our physical addresses are
vs Positional independently from data. However, detached from the data, supporting
Encoding they are not separated from the input the transformation of pointers
data as our physical addresses. There through timesteps and isolating
is no explicit mechanism in pointer manipulation from
Transformer to attend only to these the input.
positional encodings or to transform
pointers to point to these positional
encodings from one step to another.
Table 5: PANM vs Transformer
a fair comparison, and we realize that it is always possible to expose all the addresses to our model
during training.
C More Discussion on Model and Architecture
H[a]
We can see that it is critical to have H [a] ≥ 2 and pB, pE ∈ p[a]0,h
h=1 [to achieve generalization]
H[a]
n o
using pointer arithmetic. In other words, if pB, pE /∈ p[a]0,h
h=1[, we can always find a task to make]
H[a]
n o
PANM fail to generalize. For example, if pE /∈ p[a]0,h
h=1[, PANM cannot generalize in Reverse]
task. To see that, without loss of generality, we assume PANM only learn to produce the last tokenn o
H[a]
at address pE using information from some initial addresses p[′] ∈ p[a]0,h
h=1 [such that][ p][′][ ̸][=][ p][E][.]
During training, the learned pointer arithmetic to perform Reverse at the first step of the decodingn o
to produce value y1 can only be a function of p[′]: y1 = [∗]pE =[∗] f (p[′]), that is, pE = f (p[′]). During
testing, pE can receive arbitrary value, so for whatever learned f, we can always find a test sequence
such that pE ̸= f (p[′]) ∀f because pE ̸= p[′]. A similar argument can be used for pB and Copy task.
In the main manuscript, we only experiment with 1 Mode-2 pointer (H [c] = 1). If H [c] = 0, obviously
PANM will fail in tasks such as ID Sort. Using more H [c] and H [a] can still be beneficial in exchange
for slower computation (see Appendix D.5). In all experiments, we use 256-dimensional GRUs
for the PU and Ctrl. The encoder and decoder (to stack the Controller on) can vary across tasks.
The general plug-and-play framework is illustrated in Fig. 4. We also summarize operations of our
model in Algo. 1.
D Experimental Details
All the datasets and public codebases use Apache or MIT License. We trained all the models using
a single GPU Tesla V100-SXM2. The running time of PANM depends on the Encoder and tasks.
Overall, with 2 Mode-1 pointers and 1 Mode-2 pointer, PANM’s speed will be 70-80% compared to
-----
**%DFNERQH**
**0RGHO**
|2873876(48(1&( 3$10 3$10&21752//(5 0(025< (1&2'(5 '(&2'(5,13876(48(1&(|2873876(48(1&(|Col3|
|---|---|---|
||||
|3$10 0(025<|3$10&21752//(5||
||||
|(1&2'(5|'(&2'(5||
||||
|,13876(48(1&(|||
Figure 4: PANM as a plug-and-play architecture. The encoder and decoder can be any model
(LSTM, Transformer or BERT). PANM Controller can be used as the last layer of the Decoder
to access the memory during decoding. To reduce the number of parameters of the augmented
architecture, the decoder’s number of layers can be decreased.
the backbone model. For example, in Copy task, PANM’s speed is 15 iterations/s while LSTM’s is
20 iterations/s. If PANM uses Transformer Encoder, its speed is 77 iterations/s while Transformer’s
is 90 iterations/s.
Baseline Choice Although our baselines are classic, they are still very strong baselines in our studied
tasks. For example, in our algorithmic reasoning, LSTM with attention or Pointer Networks are still
dominant baselines, outperforming the more recent Transformer. In Dyck recognition, stack-based
models are still SOTA because their inductive bias is suitable for the task. Experiments in Sec.
3 adopt (Universal) Transformer+RPE, which is a recent and strong Transformer variant focusing
on generalization. There are also other sophisticated methods focusing generalization [Webb et al.,
2020].
In our experiments, PANM is ensured to have similar model size as the baselines and often built
on top of similar backbones for fair comparison. We believe it is still important to improve fundamental baselines such as Transformers or LSTM because they are the building blocks of many
practical applications including recent Large Language Models (LLMs). In this paper, we prove the
improvement of these fundamental blocks, and in future works, we will extend our ideas to more
advanced backbones such as LLMs.
D.1 Algorithmic Reasoning
We first give the details of the content-based tasks below.
In Dynamic Recall, an arbitrary input token is chosen as the query and is added to the end of the
input sequence. Depending on the length of the input, a.k.a, odd or even, the first target token will
be on the left or right of the query, following its succeeding tokens in the input. This task requires
both content matching (find query token in the input sequence) and position-based access (shift left
or right).
In Priority Sort, each input token is associated with a priority score sampled from the standard
normal distribution. The target output will be tokens from the input sequence sorted ascending
by their the score. This task can be solved in many ways and likely needs complicated symbol
processing such as looping through items in the sequence and comparing the score of tokens.
Finally, in ID Sort, each input token is augmented with an id feature vector sampled from standard
multivariate normal distribution such that every 2 tokens share one id. For example, with input
x1, x2, x3, x4, x1 and x4 may share one id while x2 and x3 shares another id. The pairing is chosen
randomly. The output token at position i-th will be the input token that share id with the i-th input
-----
token. The correct output for the earlier example is x4, x3, x2, x1. This task is specifically designed
to test the ability to learn Mode 2 pointer-based memory access.
In this task, we implement the baselines such as LSTM, attention models and Transformer using
Pytorch library. The hidden state dimension for these models are set to 512, which results in around
1-3 millions parameters. We tuned the number of layers of the encoder/decoder for these baselines
in Copy task, and realized that 1-layer gave the best performance. For NTM and DNC, we use public
repositories[1] with default controller’s hidden state of 256 dimensions and 128-slot external memory,
which results in around 1.2 millions parameters. We use the ESBN’s author codebase [2] with default
parameter setting, resulting in ≈1.2 million parameters. For PtrNet, since we do not use token index
as the training label, we produce the predicted token by performing weighted sum the input tokens
using the PtrNet’s attention weights. PtrNet’s hyperparameters are the same as attention models.
We could not find the authors’ code for Neural Stack and NRAM so we implemented them and
tuned hyperparameters for the Copy task at length L such that the model sizes are about 1.1 million
parameters. In this task PANM uses LSTM with hidden state of 256 as the Encoder and does not
stack the Controller on any decoder models, resulting in ≈1.1 million parameters.
Input: A dataset of sequence pairs D = {Xi, Yi}[N]i=1[data], initial Φ containing the parameters
of the Encoderθ, Pointer Unit PUϕ and Controller Ctrlλ, b representing the number
of bits of the address space, Ldec being the maximum number of decoding steps,
and function l measuing the length of a sequence.
Ouput: Φ[∗], trained parameters.
1 for {Xi, Yi} ∼ D do
/* Construct the memory */
2 M = Encoderθ(Xi)
/* Sample base address. During testing, pB can be set to 0 */
3 pB ∼ Uniform {0, 1}[b][]
/* Generate the address */
4 for j = 0, 1, ..., l(M) − 1 do
5 A[j] = (pB + j) mod 2[b]
6 end
/* Decode with pointers */
7 for t = 0, 1, ..., Ldec do
8 Use PUϕ and A to compute p[a]t [using Eq. 3]
9 Use M and p[a]t [to compute pointer values][ ∗][p]t[a] [(Mode 1) and][ ∗][p]t[c] [(Mode 2) (see][ §][2.3.2)]
10 Use Ctrλ and pointer values to compute pΦ(yt|Xi, zt) (see §2.3.3)
11 yˆt[i] [= argmax]yt [p][Φ][(][y][t][|][X][i][, z][t][)]
12 if ˆyt[i] [is EOS][ then]
13 break
14 end
15 end
/* Compute cross-entropy loss */
16 L = − [P]t [log][ p][Φ][(][y][t][ =][ Y][i][[][t][]][|][X][i][, y]t[i]−[)]
17 Use L to update Φ through backpropagation
18 end
Algorithm 1: PANM training. To simplify, we assume the batch size and number of pointer
heads of one.
In this experiment, all the models are trained without teacher forcing as in Graves et al. [2014], i.e,
the input to the decoder is zero (zt = 0). The detailed average accuracy (mean ± std.) of each
method together with the actual length of each testing mode are reported in Tables 7-12.
Overall, PANM observes significant improvement ranging from 10-20% on each task. We note that
when compared with individual baselines, the improvement is much higher. Consider Copy as an
example (Fig. 2a), PANM outperforms the worst baseline Transformer by around 60% at 2(L + 1)
and 30% at 4(L + 1), respectively. As stated earlier that our tasks are challenging, thus, originally
strong baselines such as NTM, DNC, and Neural Stack do not generalize well at extreme lengths,
[1https://github.com/thaihungle/SAM](https://github.com/thaihungle/SAM)
[2https://github.com/taylorwwebb/emergent_symbols](https://github.com/taylorwwebb/emergent_symbols)
-----
Figure 5: Dyck (Left): mean ± std. accuracy over 5 runs with different testing lengths. bAbI QA
(Right): mean ± std. testing accuracy and cross-entropy loss across 100 training epochs over 5 runs.
especially in ID Sort. ID Sort is trickier than content-free tasks, making some baselines fail at length
L even though it is in the training data. The best other model in this case is Content Attention, which
clearly underperforms our PANM from few % to 50% (Fig. 2b). Without curriculum learning and
under the 10-class prediction setting, methods that use implicit pointers, such as PtrNet, NRAM,
and ESBN, demonstrate mediocre performance on average when compared to PANM. Furthermore,
PANM also outperforms in length-dependent tasks (Mix, D. Recall), indicating that it can track
the sequence length in extrapolation. We hypothesize that PANM’s content–free pointer generation
mechanism to simulate list iteration makes it possible.
In Copy, only Mode-1 access is needed. As decoding step t increases, Pointer Unit generates p[a]t
following the increment of the addresses as expected. That said, for several steps, the address
attention is not sharp, showing other addresses pointed by the pointer, which is not surprising since
we use soft attention and it is hard for a neural network to learn the exact rule: p[a]t+1 [=][ p]t[a] [+ 1][. This]
problem gets worse as test length increases as the error accumulates, especially when the same token
can appear many times, which confuses the model. This explains why PANM’s performance drops
clearly in the hardest case 8(L + 1). Yet, it is still significantly better than others whose results are
near random prediction.
D.2 Dyck Language Recognition
In this task, we adopt the SRNN code from Suzgun et al. [2019][3] using the default parameters. As
explained in the main text, this task is auto-regression, hence, zt = ˆyt−1. PANM adopts SRNN (an
auto-regressive model) as the encoder and does not stack the Controller on any decoder models. The
result is visualized in Fig. 5 (left).
D.3 Conpositional Learning
In this task, we adopt the code from Csord´as et al. [2021][4] using the default parameters. When using
Transformer Encoder, we need to have Transformer-like decoder to align the token representation of
the encoding and decoding phases. As such, in SCAN, we utilize the 3-layer Transformer decoder,
replace its last layer by the Controller. Formally, zt in Eq. 7 becomes Decoder(yt−) where the
Decoder is a 2-layer Transformer. In Mathematics reasoning task, we use similar integration except
that the Encoder is Transformer with relative positional encoding (TRM + RPE). By reducing the
number of decoding layer, we ensure PANM’s hyperparameter number equivalent to that of the
Transformer baseline (12M). All models are trained with teacher forcing as in Csord´as et al. [2021].
SCAN The training size is 16990 and the test size is 3920. SCAN is a well-known and standard
benchmark for testing compositional learning and generalization in sequential models. One property
of this dataset is that a new length often contains new rules that must be captured by the model to
ensure generalization, and thus, if the model fails to learn a hidden rule, its performance may drop
significantly from one length split to another. Fig. 6 illustrates PANM’s testing accuracy curves
when L = 22, 24, 25, 26. Other learning curves for L > 26 looks similar to L = 26 where PANM
easily solves the task perfectly.
[3https://github.com/suzgunmirac/marnns](https://github.com/suzgunmirac/marnns)
[4https://github.com/RobertCsordas/transformer_generalization](https://github.com/RobertCsordas/transformer_generalization)
-----
Figure 6: SCAN: PANM’s exemplar learning curves.
Mathematical Problems Table 13 reports the accuracy with mean and standard deviation. Here, we
augment TRM and TRM+RPE with PANM. Both shows improvement, especially for TRM+RPE,
indicating that PANM is compatible with other methods designed to improve generalization in
Transformer.
D.4 Other NLP Tasks
The bAbI dataset consists of 20 synthetic tasks that evaluate various reasoning skills. To prepare
the data for each task, we combine train/valid/test into a single set and sort it by length and split it
into training and testing sets, as described in the main text. We train the models jointly on all 20
tasks and measure the accuracy of their answers, which are considered correct only if they match
the ground truth answer perfectly. The training/evaluation follows exactly the standard protocol
presented in [Le et al., 2020b]. The Transformer used here has 8 heads, 3 layers of encoding, 3
layers of decoding, and hidden dimensions of 512. PANM uses the same Transformer backbone
except that the decoder has 2 layers to make the model size equivalent. We run each model 5 times
to report the mean and standard deviation as in Fig. 5 (right). Table 3 reports the detailed numbers.
The SQUAD dataset contains more than 100K realistic context/question-answer pairs.
Again, we combine train/test into a single set and sort it by length and split into
new train/test sets. Following Kenton and Toutanova [2019], we use BERT model
[(https://huggingface.co/bert-base-uncased) to predict the start and end location of the](https://huggingface.co/bert-base-uncased)
answer, and finetune the model with the same setting (e.g., 3 epochs with a learning rate of 5e-5) except that our batch size is 16 to fit with our GPU. PANM appends the Controller to BERT to predict
the start and end. Both BERT and PANM have around 114 million parameters. Table 4 reports the
detailed numbers.
D.5 Additional Experiments
Pointer Hyperparameters In this section, we confirm the logic presented in Appendix C by performing experiments that involve varying the number and type of pointers.
Mode-1 Pointers We test the PANM version in § 3.1 with H [a] = 0, 1, 2, 3 on Copy, Reverse. We
do not use Mode-2 pointer here to avoid confusion (H [c] = 0). Fig. 7 plots the testing accuracy
over training time. As H [a] = 0, there is no pointer information for the Controller, PANM should be
equivalent to an GRU and fail to generalize. As H [a] = 1, the only pointer is initialized either with
the base or end address. As shown in Fig., PANM cannot generalize in both Copy and Reverse tasks
with single Mode-1 pointer, which is proved in Appendix C. In the caseH [a] = 3, we initialize them
with the base, end and middle addresses. We observe that increasing H [a] to 3 slightly reduces the
performance in these tasks. We speculate that too many Mode-1 pointers make the learning harder;
in particular, learning to manipulate the third pointer may interfere with that of the first or second
pointer, which are more important in these tasks. Generally, most tasks only require list iterations
from the head to the tail or vice versa. Hence, we keep H [a] = 2 in all experiments to save the
computation cost.
Mode-2 Pointers We fix H [a] = 2, and vary H [c] = 0, 1, 2 on Copy, Priority Sort, ID Sort. As shown
in Fig. 8, without Mode-2 pointers (H [c] = 0), generalization in Priority Sort and ID Sort is reduced
significantly by 50% and 30%, respectively because these tasks focus more on the content of the
input sequence and often demand comparing the content of different tokens. Interestingly, a contentfree task like Copy also suffers from performance drop if there is no Mode-2 pointer. Specifically,
we find out that for 2/5 runs, the model converges to a suboptimal solution, leading to high variance
-----
Figure 7: Testing accuracy (mean ± std.) at 2(L+1) length over training steps. Different configurations of Mode-1 pointers are trained and evaluated 5 times.
Figure 8: Testing accuracy (mean ± std.) at 2(L+1) length over training steps. Different configurations of Mode-2 pointers are trained and evaluated 5 times.
-----
Failure example
Task Chat-GPT PANM
Input Chat-GPT Output True Output
Copy 100% N/A 84%
Reverse 69% $%&&$%ˆ@%# %#ˆ@ˆ%$&&%$ #%@ˆ%$&&%$ 84%
Mix 42% $%&&$%ˆ@%# %#ˆ&&$%$@%& $%$&$%$@$# 98%
Dynamic Recall 14% $(&&$#ˆ@%# % $ @ 45%
Table 6: Failure of Chat-GPT on algorithmic reasoning test cases of length 2L. Token-level accuracy
is reported. We do not test Chat-GPT on Priority and ID sort because they have complicated token
representations. PANM results cannot be directly compared, and shown for reference only.
and slightly lower mean accuracy. Perhaps, Mode-2 pointer allows the decoder to access the input
instantly (like content-based attention), avoid forgetting, and thus, improve the prediction as the
sequence is longer. Having more Mode-2 pointers generally improves the generalization in Copy
and Priority Sort, yet the gain is small for H [c] = 2, or even negative in ID Sort. Therefore, we
trade-off performance with computing efficiency by setting H [c] = 1 in our experiments.
D.6 Failures of Chat-GPT in Our Tasks
Large Language Models (LLMs), especially Chat-GPT, have shown remarkable results in reasoning
and generalization. Directly comparing Chat-GPT with other models used in our experiments would
be unfair because Chat-GPT was not directly trained with our datasets and it has much more parameters than our model. Therefore, in this section, we merely use Chat-GPT as a tool to verify that our
chosen tasks, despite being simple, are non-trivial. The evaluated tasks are algorithmic reasoning
and SCAN. We do not examine Dyck recognition because the output encoding is complicated to
represent in text. Other datasets are more common and likely to be used for training Chat-GPT, thus,
are not suitable for generalization test. For example, in Mathematics task, if we ask Chat-GPT the
question from the data What is the hundreds digit of 31253?, it provide the correct answer
(2). However, slightly modifying the question to ensure it does not appear in the training and testing
set will successfully fool Chat-GPT:
- Example 1:
– Prompt: What is the hundreds digit of 312537?
– Chat-GPT answer: The hundreds digit of the number 312537 is 2.
- Example 2:
– Prompt: What is the hundreds digit of 319253?
– Chat-GPT answer: The hundreds digit of the number 319253 is 9.
We use Open AI’s Chat-GPT 3.5 version September and evaluate the model on our data using fewshot example prompts, following the format:
Examples:
input x[1]1[, x][1]2[, ...][ output][ y]1[1][, y]2[1][, ...]
input x[2]1[, x][2]2[, ...][ output][ y]1[2][, y]2[2][, ...]
...
Question:
input x1, x2, ... output
Algorithmic Reasoning To ensure that Chat-GPT does not memorize the output answer from
its vast training data, we use non-digit symbols: ˜!@#$%ˆ&*( as 10 tokens of the datasets. For
each task, we sample 20 training examples of length L = 5 to build the in-context examples, and
test on 1 longer sequence of length 2L = 10. We conduct 20 trials and report the average test
accuracy. Table 6 summaries the evaluation results. Overall, except for Copy task where Chat-GPT
shows excellent generalization, other tasks are very hard for Chat-GPT, indicating that the length
extrapolation problem still poses a big challenge to today AI techniques.
SCAN In this task, we sample 20 examples in the L-cutoff=40 split set (easiest) as in-context
learning examples and evaluate on 10 unseen sequences. Chat-GPT totally failed in this task. When
-----
Copy Task 9(L) 10(L+1) 20((L+1)*2) 40((L+1)*4) 80((L+1)*8)
LSTM 100±0 47±0 11±0 10±0 10±0
Location Attention 100±0 93±2 51±5 28±4 20±1
Content Attention 100±0 92±1 53±0 33±0 22±0
Hybrid Attention 100±0 91±1 50±1 23±3 13±0
Transformer 100±0 20±1 16±0 13±0 11±0
NTM 100±0 74±4 13±2 11±0 11±0
DNC 100±0 54±2 11±1 11±0 11±0
Neural Stack 100±0 90±4 47±2 29±0 17±0
PtrNet 100±0 90±2 52±1 32±1 20±0
NRAM 100±0 81±3 15±2 11±0 11±1
ESBN 100±0 92±0 34±0 11±0 11±0
PANM 100±0 100±0 84±1 52±1 36±1
Table 7: Copy: accuracy (mean ± std. over 5 runs)
Reverse Task 9(L) 10(L+1) 20((L+1)*2) 40((L+1)*4) 80((L+1)*8)
LSTM 100±0 96±0 53±0 33±0 22±0
Location Attention 100±0 26±3 18±1 14±0 12±0
Content Attention 100±0 81±25 38±11 23±4 16±2
Hybrid Attention 100±0 98±1 50±7 24±2 15±1
Transformer 100±0 18±0 15±3 13±1 11±0
NTM 100±0 95±7 65±27 26±13 13±1
DNC 100±0 93±3 60±18 23±6 12±1
Neural Stack 100±0 96±1 64±4 35±3 19±1
PtrNet 100±0 77±5 32±4 22±1 12±0
NRAM 100±0 96±1 60±3 33±2 15±2
ESBN 99±0 95±0 14±2 11±0 10±0
PANM 100±0 100±0 84±3 51±1 33±1
Table 8: Reverse: accuracy (mean ± std. over 5 runs)
testing on the similar length or longer length as the examples, Chat-GPT cannot produce any exact
match results (exact match accuracy=0). Below are some failure examples:
- IN: walk and turn opposite right OUT:
– Chat-GPT output: I TURN RIGHT I TURN RIGHT I WALK
– True output: I WALK I TURN RIGHT I TURN RIGHT
- IN: run around left twice and run around right OUT:
– Chat-GPT output: I RUN I TURN LEFT I RUN I TURN LEFT I RUN
I TURN RIGHT I RUN
– True output: I TURN LEFT I RUN I TURN LEFT I RUN I TURN LEFT I RUN
I TURN LEFT I RUN I TURN LEFT I RUN I TURN LEFT I RUN I TURN LEFT
I RUN I TURN LEFT I RUN I TURN RIGHT I RUN I TURN RIGHT I RUN
I TURN RIGHT I RUN I TURN RIGHT I RUN
-----
Mix Task 9(L) 10(L+1) 20((L+1)*2) 40((L+1)*4) 80((L+1)*8)
LSTM 100±0 96±0 53±0 33±0 22±0
Location Attention 100±0 92±10 56±1 45±0 30±6
Content Attention 100±0 61±8 57±1 14±0 12±0
Hybrid Attention 100±0 98±1 56±3 34±0 23±6
Transformer 100±0 18±0 15±3 13±1 11±0
NTM 100±0 95±7 65±27 26±13 13±1
DNC 100±0 91±4 58±9 19±3 11±1
Neural Stack 100±0 87±3 50±5 14±2 11±0
PtrNet 100±0 59±3 51±3 13±1 11±0
NRAM 99±0 82±7 48±6 17±4 10±1
ESBN 99±0 95±0 14±2 11±0 10±0
PANM 100±0 100±0 98±1 54±0 54±1
Table 9: Mix: accuracy (mean ± std. over 5 runs)
Drecall Task 9(L) 10(L+1) 20((L+1)*2) 40((L+1)*4) 80((L+1)*8)
LSTM 85±7 74±16 21±2 12±1 11±0
Location Attention 88±1 82±1 30±3 19±2 13±0
Content Attention 88±2 84±0 27±3 17±1 13±1
Hybrid Attention 69±25 66±24 28±4 19±2 13±1
Transformer 33±1 32±0 22±0 14±1 12±1
NTM 86±3 72±8 22±1 15±0 12±0
DNC 89±0 83±1 22±1 14±2 11±0
Neural Stack 85±4 76±2 23±1 15±1 13±1
PtrNet 65±14 48±7 25±6 14±1 12±1
NRAM 61±6 59±4 21±4 13±2 11±1
ESBN 90±1 86±1 22±3 11±1 10±0
PANM 92±0 89±0 45±1 22±0 16±0
Table 10: Drecall: accuracy (mean ± std. over 5 runs)
PSort Task 10(L) 11(L+1) 21((L+1)*2) 41((L+1)*4) 81((L+1)*8)
LSTM 87±2 83±2 28±3 16±1 12±1
Location Attention 69±3 66±3 45±1 27±2 20±2
Content Attention 97±0 96±0 57±6 30±7 22±5
Hybrid Attention 85±3 81±1 33±1 25±2 23±3
Transformer 71±9 48±8 21±3 16±4 14±4
NTM 96±2 95±3 34±18 12±1 10±0
DNC 95±0 92±2 29±7 11±1 10±0
Neural Stack 92±2 79±2 32±3 13±2 11±1
PtrNet 77±2 71±2 43±2 24±1 19±1
NRAM 82±3 80±2 51±2 25±1 13±1
ESBN 26±4 24±4 13±2 11±1 10±0
PANM 97±0 97±1 86±2 32±7 27±4
Table 11: PSort: accuracy (mean ± std. over 5 runs)
-----
ID Sort Task 10(L) 11(L+1) 21((L+1)*2) 41((L+1)*4) 81((L+1)*8)
LSTM 48±10 40±5 20±1 13±1 11±1
Location Attention 34±1 32±1 20±0 14±0 12±0
Content Attention 98±1 56±1 28±2 16±0 12±0
Hybrid Attention 32±1 31±1 19±1 14±0 12±0
Transformer 34±2 29±0 19±0 15±0 12±0
NTM 40±23 32±17 16±4 12±2 11±0
DNC 35±1 36±1 23±2 17±2 13±0
Neural Stack 33±3 32±1 19±1 13±1 12±0
PtrNet 27±1 24±1 15±0 12±1 11±0
NRAM 31±2 29±1 14±0 12±0 11±0
ESBN 47±18 42±12 18±0 12±0 10±0
PANM 100±0 100±0 56±2 25±0 15±0
Table 12: ID Sort: accuracy (mean ± std. over 5 runs)
Task add or sub place value
U. TRM+ RPE[♣] 0.97 ± 0.01 0.75 ± 0.10
TRM + RPE[♣] 0.91 ± 0.03 -
TRM + RPE[♦] 0.91 ± 0.04 0 ± 0
TRM[♣] 0.89 ± 0.01 0.12 ± 0.07
TRM[♦] 0.86 ± 0.01 0.05+0.05
U. TRM[♣] 0.94 ± 0.01 0.20 ± 0.02
PANM TRM base (Ours) 0.91 ± 0.01 0.15 ± 0.02
PANM TRM + RPE base (Ours) 0.97 ± 0.02 0.86 ± 0.05
Table 13: Mathematics: mean ± std accuracy over 5 runs. ♣ are numbers from Csord´as et al. [2021].
♦ is our rerun to confirm the results, which, in some cases, could not match the reported numbers. means training crash reported in the original papers. We run PANM using the authors’ codebase.
-----
| [
"Hung, Le",
"Dung, Nguyen",
"Kien, Do",
"Svetha, Venkatesh",
"Truyen, Tran"
] | 2024-04-18T00:00:00 | null | false | 0 | 0 | null | https://arxiv.org/abs/2404.11870v1 | https://arxiv.org/abs/2404.11870 | https://www.semanticscholar.org/paper/e5383863d4cfc4a31136c9c0d57debf964d2811f |
Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data | Despite recent advances in training and prompting strategies for Large Language Models (LLMs), these models continue to face challenges with complex logical reasoning tasks that involve long reasoning chains. In this work, we explore the potential and limitations of using graph-based synthetic reasoning data as training signals to enhance LLMs' reasoning capabilities. Our extensive experiments, conducted on two established natural language reasoning tasks -- inductive reasoning and spatial reasoning -- demonstrate that supervised fine-tuning (SFT) with synthetic graph-based reasoning data effectively enhances LLMs' reasoning performance without compromising their effectiveness on other standard evaluation benchmarks. | It is demonstrated that supervised fine-tuning (SFT) with synthetic graph-based reasoning data effectively enhances LLMs' reasoning performance without compromising their effectiveness on other standard evaluation benchmarks. | [
"Jiaming, Zhou",
"Abbas, Ghaddar",
"Ge, Zhang",
"Jianye, Hao",
"Yingxue, Zhang",
"Liheng, Ma",
"Bin, Wang",
"Yaochen, Hu",
"Soumyasundar, Pal",
"Mark, Coates"
] | 2024-09-18T00:00:00 | null | false | 0 | 0 | null | http://arxiv.org/abs/2409.12437 | https://arxiv.org/abs/2409.12437 | https://www.semanticscholar.org/paper/f94d498fdd5b841622e3e4d733f6d8e92ac3a69c |
|
Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes | Numerical reasoning is an essential ability for NLP systems to handle numeric information. Recent research indicates that fine-tuning a small-scale model to learn generating reasoning processes alongside answers can significantly enhance performance. However, current methods have the limitation that most methods generate reasoning processes with large language models (LLMs), which are “unreliable” since such processes could contain information unrelated to the answer. To address this limitation, we introduce enhancing numerical reasoning with reliable processes (Encore), which derives the reliable reasoning process by decomposing the answer formula, ensuring which fully supports the answer. Nevertheless, models could lack enough data to learn the reasoning process generation adequately, since our method generates only one single reasoning process for one formula. To overcome this difficulty, we present a series of pre-training tasks to help models learn the reasoning process generation with synthesized data. The experiments show that Encore yields improvement on all five experimental datasets with an average of 1.8%, proving the effectiveness of our method. | This work introduces Enhancing NumeriCal reasOning with Reliable procEsses (Encore), which derives the reliable reasoning process by decomposing the answer formula, ensuring which fully supports the answer. | # Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes
**Dingzirui Wang, Longxu Dou, Xuanliang Zhang, Qingfu Zhu[∗], Wanxiang Che**
Harbin Institute of Technology
{dzrwang, lxdou, xuanliangzhang, qfzhu, car}@ir.hit.edu.cn
**Abstract**
Numerical reasoning is an essential ability for
NLP systems to handle numeric information.
Recent research indicates that fine-tuning a
small-scale model to learn generating reasoning processes alongside answers can significantly enhance performance. However, current
methods have the limitation that most methods
generate reasoning processes with large language models (LLMs), which are “unreliable”
since such processes could contain information
unrelated to the answer. To address this limitation, we introduce Enhancing NumeriCal
reasOning with Reliable procEsses (ENCORE),
which derives the reliable reasoning process
by decomposing the answer formula, ensuring
which fully supports the answer. Nevertheless,
models could lack enough data to learn the reasoning process generation adequately, since our
method generates only one single reasoning
process for one formula. To overcome this difficulty, we present a series of pre-training tasks
to help models learn the reasoning process generation with synthesized data. The experiments
show that ENCORE yields improvement on all
five experimental datasets with an average of
1.8%, proving the effectiveness of our method[1].
**1** **Introduction**
Numerical reasoning is an essential ability for NLP
systems to handle arithmetic questions in real scenarios, which refers to generating answers to numerical questions with given evidence (Geva et al.,
2020). The evidence is to support reasoning since
the question could not furnish all necessary information, as seen in passages and tables in numerical
data (Zhu et al., 2021b,a; Chen et al., 2021). The answer generated encompasses various elements, including values (Dua et al., 2019), programs (Chen
et al., 2021), and formulas (Zhu et al., 2021a)[2].
_∗Corresponding author._
[1Our code is released in link.](https://github.com/zirui-HIT/Encore)
2For the sake of conciseness in this paper, we collectively
refer to these elements as formulas.
**Table**
**Year** **Data Group (Col1)** **Mobileye (Col2)** **…** **Total (Col8)**
**2018 (Row1)** $5,421 $10,278 … $24,389
**2019 (Row2)** $5,424 $10,290 … $24,513
**Text**
During the third quarter of 2018, we made an organizational change to
combine … approximately $480 million of goodwill was reallocated.
**Question**
What is the percentage change of total goodwill from 2018 to 2019?
**Answer**
(24,513 - 24,389) / 24,389
**LLM Generation** **Formula Decomposition**
**Rationale** **Operand**
Find the difference between the {Col8, Row2}, {Col8, Row1},
total amount in 2018 and 2017, {Col8, Row1}
which is divided by 2017. **Operator**
**Additionally, the text mentions** (x1 - x2) / x3
**a change in goodwill allocation** **Located Formula**
**in the third quarter of 2018,** ({Col8, Row2} - {Col8, Row1}) /
**which should be considered.** {Col8, Row1}
Figure 1: The reasoning processes generated by
gpt-3.5-turbo and ENCORE. The left process is described with natural language, where bold words are
unrelated to the answer. The right process contains three
parts designed in ENCORE that fully support the answer.
Currently, although LLMs have demonstrated
great performance on the numerical reasoning
(Chen et al., 2022a; Gao et al., 2022), we argue that it is still valuable to study and employ
the small-scale model (e.g., BARTLARGE (Lewis
et al., 2020)) since their low computational efficiency and decent performance, which still have application value in real scenarios. Previous research
has demonstrated that teaching small-scale models
to generate reasoning processes during fine-tuning
can make the prediction more accurate and explainable (Cobbe et al., 2021; Ho et al., 2023; Magister
et al., 2023). For example, SCOTT (Wang et al.,
2023) employs LLMs to generate reasoning processes based on questions and answers, which are
used to fine-tune small-scale models. However, the
reasoning processes of the current methods could
be unreliable since most methods employ LLMs to
generate the processes, where such processes could
contain information that does not support the answer (e.g., bold words in the left part of Figure 1).
10812
-----
To address this issue, we present a novel numerical reasoning method called Enhancing NumeriCal
reasOning with Reliable procEsses (ENCORE).
Our method decomposes the operands and operators from the answers as the reasoning process,
which we concatenate with the answer as the output
of the model. The generated reasoning process of
ENCORE is reliable since the process entirely derives from decomposing the answer, ensuring that
the process does not contain answer-unrelated information and fully supports the answer, as shown
in the right part of Figure 1. However, our method
could lack enough data to enable the model to learn
the reasoning process generation adequately, as for
one answer, our method generates only one reasoning process, while methods based on LLMs
can generate multiple processes (Ho et al., 2023).
To overcome this difficulty, we present a series of
pre-training tasks to help models learn the process
generation with synthesized data.
To evaluate the effectiveness of ENCORE, we
adopt it on five mainstream numerical reasoning
datasets with various settings. Compared with baseline models, ENCORE brings performance improvement on all datasets and leads an average boost
of 1.8%, showing the effectiveness and generalization of ENCORE. Moreover, in comparison to
fine-tuning with the reasoning process generated by
gpt-3.5-turbo, our method is superior by about
10%, which proves that the reasoning process generated by ENCORE has higher quality.
Our contributions can be summarized as follows:
- To ensure that generated reasoning processes
fully support answers, we propose ENCORE,
which generates reliable reasoning processes by
decomposing answer formulas.
- To alleviate the difficulty of the insufficient data
scale of ENCORE, we introduce a series of pretraining tasks, which enhance the generation of
the reasoning process with synthesized data.
- To prove the effectiveness of ENCORE, we evaluate it with five mainstream numerical reasoning datasets, which yield performance improvement on all experimental datasets with an average
boost of 1.8% compared with baselines.
**2** **Background**
The numerical reasoning task is to generate single
or multiple formulas as answers based on the given
question as well as the textual and tabular evidence.
About the input, textual evidence contains several
paragraphs, and tabular evidence consists of one
or more tables, where the first row and column
are called table headers, which describe information about the cells of the corresponding rows or
columns. About the output, one formula consists
of operators and operands (e.g., 2 + 1 × 3). The
operands refer to the values manipulated or processed in the formula (e.g., 2, 1, 3). The operators
are arithmetic symbols or functions that are to be
performed on the operands (e.g., +, ×).
However, in practical applications, answers
could not be annotated with formulas. We also
try to apply our method to such data without formulas. For the questions with simple calculations (e.g.,
DROP (Dua et al., 2019)), we can directly generate
the formulas following the previous work[3]. For the
more complex calculations (e.g., GSM8K (Cobbe
et al., 2021)), we employ LLMs with in-context
learning, which can generate answers with a few
samples without fine-tuning, to prove the effectiveness of the reasoning process we designed.
**3** **Methodology**
In this section, we present our numerical reasoning method called ENCORE. First, we introduce
the pipeline of ENCORE, which generates reliable
reasoning processes fully supporting the answers
(§ 3.1). Then, we propose three pre-training tasks
based on ENCORE to enhance the generation performance of the reasoning process (§ 3.2).
**3.1** **ENCORE**
In this section, we introduce ENCORE, which enhances numerical reasoning by generating reliable
reasoning processes decomposed from answers.
The illustration of ENCORE is shown in Figure 2.
**3.1.1** **Retrieve**
Because evidence irrelevant to the question could
mislead the model, causing performance degradation, we employ a retriever to retrieve the questionrelevant evidence. We concatenate each text paragraph and table column with questions, then feed
it into a binary classification model to generate correlation confidence. The classification model is
trained with the ground evidence annotated by the
dataset or the headers in the located formulas obtained in § 3.1.2. Then, we sort each text paragraph
and table column with the correlation confidence
[3https://github.com/allenai/](https://github.com/allenai/allennlp-reading-comprehension)
[allennlp-reading-comprehension](https://github.com/allenai/allennlp-reading-comprehension)
10813
-----
**Candidate Evidence** **Retrieved Evidence**
Our contract assets consist of capitalized… <None Used>
**Current: Federal** **State** **Deferred: Federal** **State** **Current: Federal**
**Year** **Year**
**(Col1)** **(Col2)** **(Col3)** **(Col4)** **(Col1)**
**1. Retrieve**
**2018 (Row1)** 1.1 0.43 0.034 0.016 **2018 (Row1)** 1.1
**2019 (Row2)** 1.3 0.42 0.47 0.033 **2019 (Row2)** 1.3
**Fine-Tuning Input** **Fine-Tuning Output**
<Q> What is the average… | <V> {Col1, Row1} | {Col1, Row2} |
<T> … | Federal | NR 2018 | … | <D> ( x1 + x2 ) / 2 |
<P> … <A> ({Col1,Row1}+{Col1,Row2})/2
**4. Fine-Tune**
**Operators**
( x1 + x2 ) / 2
**Answer** **Located Formula**
(1.1+1.3) / 2 ({Col1, Row1} + {Col1, Row2}) / 2
**2. Locate** **3. Decompose** **Operands**
{Col1, Row1}, {Col1, Row2}
Figure 2: The illustration of ENCORE, which takes the question “What is the average current federal of 2018-2019?”
as the example. ENCORE consists of four steps: 1.Retrieve question-related evidence. 2.Locate the table heads of
each value in the formula. 3.Decompose the located formula into operators and operands. 4.Fine-tune the model
with the input and the generated output.
of the model output. We select the top-k evidence
as the retrieval result, where k is determined by the
input length limit, and then we concatenate such
evidence with the question as the model input.
**3.1.2** **Locate**
This step is designed to reduce the difficulty of
value memory and table understanding by changing
the value format in answers. In prior work, it has
been observed that current models struggle to accurately retain complex floating-point values present
in the evidence (Thawani et al., 2021). Besides,
the table understanding ability of most models is
limited, as linearized input disrupts the structural information of the table (Jin et al., 2022). To alleviate
the challenge of extracting numerical values from
tables, we propose substituting values in the answer
by locating their respective headers in the table,
which we call the located formula. For instance, as
illustrated in Figure 2, instead of directly using the
value “1.1”, we use “{Col1, Row1}” corresponding to its cell headers in the table. Consequently,
the model only needs to recall the headers associated with relevant cells, lowering the difficulty
of specific value memory and table understanding,
thereby enhancing the reasoning performance.
We use string matching to locate the cell corresponding to each value in the formula, which could
not handle the cells with the same value. An example of our labeling method is shown in Appendix B
However, how to detect the question-related entities in the evidence is a long-studied problem,
which has been discussed in detail by the previous
works (Liu et al., 2021; Kumar et al., 2023; Wu
et al., 2023). Therefore, to focus on the main topic
of this paper, we will discuss how to merge the
detecting methods with ENCORE in the future.
**3.1.3** **Decompose**
The designed motivation for this step is to reduce
the complexity of reasoning through multi-step generation. Current methods, which ask models to generate formulas in one step, often lead to challenges
in establishing a clear correspondence between the
answer and the input information. For example,
most operands in the formula of the answer are typically extracted from evidence, while the majority
of operators are determined by the semantics of the
question (e.g., “ratio” in the question leads to the
division operator). Moreover, some formulas are
too complex to generate correctly in one step. To
address these issues, we design a multi-step generation process for the model to achieve the numerical
reasoning results. We decompose the formula into
operators and operands, which are used to ask the
model to generate before located formulas. By first
generating the more straightforward operators and
operands and then generating the complete formula,
we can reduce the difficulty in generating formulas,
thereby enhancing accuracy.
**3.1.4** **Fine-tune**
After constructing the located formulas and corresponding operators and operands, we take them
with answers as output and use the question and
10814
-----
**4** **Experiments**
**4.1** **Experiment Setup**
**Datasets** We apply ENCORE on five datasets
with various settings: FinQA (Chen et al., 2021),
ConvFinQA (Chen et al., 2022b), TAT-QA (Zhu
et al., 2021a), MathQA (Amini et al., 2019) and
DROP (Dua et al., 2019), which cover different
types of evidence and answers. More detailed information can be found in Appendix C.
**Metrics** For MathQA, FinQA, and ConvFinQA,
we employ execution accuracy as our evaluation
metrics (Chen et al., 2021). About DROP and TATQA, we evaluate methods with the exact match
(EM) (Zhu et al., 2021a). For TAT-QA, we additionally use Arithmetic EM to represent the EM on
numerical reasoning questions. The definition of
these metrics can be found in Appendix D.
**Baselines** We adopt BERTBASE (Devlin et al.,
2019) as our baseline retrieval model. We use
BARTLARGE (Lewis et al., 2020) and T53B (Raffel et al., 2020) as our baseline seq2seq models.
**Settings** All experimental models are implemented with Huggingface transformers (Wolf et al.,
2020) and Fairseq (Ott et al., 2019). We adopt the
pre-training tasks to TAT-QA, FinQA, and ConvFinQA with 121, 732 synthesized examples since
they contain tabular evidence. More detailed settings are shown in Appendix E.
**4.2** **Main Results**
The main experiment results are summarized in
Table 1, where the detailed results on each dataset
are shown in Appendix F. ENCORE brings performance improvement on all experiment datasets
with all used baseline models and achieves SOTA
or near-SOTA results on most datasets, which
proves the efficiency and generalizability of our
method. Compared to BARTLARGE, our method
exhibits more obvious improvements on T53B, suggesting that larger-scale models can more effectively learn the generation of reasoning processes
and apply the associated capabilities to answer generation. However, our method exhibits an obvious
discrepancy with the current SOTA on DROP. This
is attributed to the low quality of the synthesized
formulas, where the synthesized results could be
incorrect, which subsequently misleads the model
into erroneous reasoning processes, resulting in
poor generation performance.
the retrieved evidence as input to fine-tune the
seq2seq model. During the construction of inputs
and outputs, we use tags like the form “<A>” to
distinguish different parts. We also add other information in the output sequence to meet the requirements of different datasets, such as value scales
(e.g., “billion” and “percentage” in TAT-QA) and
spans (e.g., span-type answers in DROP).
**3.2** **Pre-Training**
With the reasoning process generated by ENCORE,
the model could still struggle to learn how to produce such processes because of the limit of the
training data scale. To aid the model in learning to
generate the reasoning process, in this section, we
introduce three pre-training tasks. We synthesize
questions, answers, and reasoning processes based
on different templates, then pre-train the model
with all these data as the multi-task training. The
template and example of each pre-training task are
shown in Appendix A.
We primarily design pre-training tasks for tabular evidence rather than textual evidence, for two
reasons: (1) Most current pre-trained language
models are trained on textual data, ensuring their
proficiency in generating text-related reasoning processes. (2) Direct linearization of the tabular evidence during input disrupts the structural information, making it challenging for the model to
generate table-related reasoning processes.
**Table Location Prediction** is designed to help
the model better locate operands in the formula by
learning the correspondence between the cell and
the corresponding table headers. Given the row
and column headers of one cell, the model should
predict the value of this cell.
**Table Calculation Prediction** is designed to enhance operator generation. About the tabular evidence, many calculation formulas involve all values
in one column as the operands, such as the average
or total value of a column. We help models learn
the generation of these formulas with the given
column and the calculation type.
**Hierarchical Table Prediction** is designed for
models to perform better operand extraction by
comprehending the hierarchical table structure with
multi-level headers (Zhao et al., 2022), where the
whole table can be seen as several sub-tables. For
this task, models should predict the name of the
first level of each given table header.
10815
-----
**FinQA** **ConvFinQA** **TAT-QA** **MathQA** **DROP**
**Method**
**Dev** **Test** **Dev** **Test** **Dev** **Test** **Dev** **Test** **Dev** **Test**
Published SOTA 69.7 68.0 **76.5** 76.0 N/A[†] **76.8** N/A[†] **83.0** N/A[†] **90.0**
BARTLARGE 62.5 58.8 67.4 71.5 68.5 77.4 78.0 68.6 67.4
_−_
+ ENCORE 64.0 62.3 68.9 74.4 71.0 _−_ 77.7 78.8 69.2 68.4
∆ +1.5 +3.5 +1.5 +2.9 +2.5 _−_ +0.3 +0.8 +0.6 +1.0
T53B 66.9 65.0 73.3 79.6 73.8 78.1 78.6 77.3 77.1
_−_
+ ENCORE **71.6** **69.4** **76.0** **79.8** **75.6** **71.5** **80.0** **80.6** **77.6** **77.1**
∆ +4.7 +4.4 +2.7 +0.2 +1.8 _−_ +1.9 +2.0 +0.3 +0.0
Table 1: The main experiment results of ENCORE. _[†]_ denotes the model does not report the corresponding metric
result. The experiments on TAT-QA lack results on the test set due to it is not public, where only periodic submissions
are allowed for evaluation, so we only evaluate the model that performs best on the dev set. On FinQA, ConvFinQA,
and MathQA, the previous SOTA results are achieved by APOLLO (Sun et al., 2022). The best results on TAT-QA
and DROP are Code and MindOpt Copilot respectively, which papers have not been published. The best results of
our methods are marked in bold. The best results of all methods are marked in underline.
**Setting** **EM** **Arithmetic EM**
ENCORE 74.1 78.6
w/o. Operand 72.7 (−1.4) 75.5 (−3.1)
w/o. Located Formula 73.9 (−0.2) 77.3 (−1.6)
w/o. Operator 73.1 (−1.0) 78.3 (−0.3)
Table 2: The performance of BARTLARGE under different settings on TAT-QA using golden evidence without
pre-training. The Arithmetic EM denotes the EM of the
arithmetic questions.
It is noteworthy that, in comparison to datasets
that solely utilize textual evidence (e.g., MathQA,
DROP), our method exhibits a more significant improvement on datasets with both textual and tabular
evidence (e.g., TAT-QA, FinQA). This is because
our designed located formula addresses the challenge of cell location, where textual evidence does
not involve this challenge. Besides, our designed
pre-training tasks mainly focus on tabular evidence,
so the improvement of textual evidence is less obvious when compared to tabular evidence.
**4.3** **Ablation Studies**
In this section, we perform ablation studies to further evaluate the performance of ENCORE. We use
TAT-QA as our study dataset since it covers various
types of evidence and answers, which can comprehensively reflect the performance of the model.
**4.3.1** **Reasoning Process Studies**
To verify that each designed part of the reasoning
process in ENCORE is effective, we perform ablation experiments on each part separately, which
is shown in Table 2. We can see that each part of
the reasoning process contributes the performance
improvement, which proves the effectiveness of the
**Setting** **EM** **Arithmetic EM**
ENCORE 75.7 81.2
w/o. Table Location 75.2 (−0.5) 79.8 (−1.4)
w/o. Table Calculation 74.6 (−1.1) 79.4 (−1.8)
w/o. Hierarchical Table 75.3 (−0.4) 80.5 (−0.7)
w/o. All 74.1 (−1.6) 78.6 (−2.6)
Table 3: The performance of ENCORE after removing
different pre-training tasks of BARTLARGE on TATQA with golden evidence.
reasoning process designed by our method. According to the arithmetic EM, we can see that extracting
operands has the most apparent impact on model
performance. This is because the model regards the
values in the answer as part of the formula structure,
lacking the awareness of extracting values from evidence. The effect of the located formula is also
apparent, which proves that it is hard for models
to map the table headers to the corresponding cell
value. The improvement of introducing operators is
not significant since the formula operator is similar
to that in the answer.
To verify the impact of generation orders of different parts of the reasoning process, we also adopt
the ablations of the reasoning process format under
two settings: generate operands first and operators
first. Generating operands first leads to 74.1% on
EM, and generating operators first is 73.6% in our
experiment, showing that generating operands first
is better, which is the order we used in ENCORE.
**4.3.2** **Pre-Training Studies**
To prove that all designed pre-training tasks are
effective, we conduct ablation experiments on them.
Table 3 shows the experiment results of ENCORE
with different pre-training settings.
10816
-----
From Table 3, we can observe that: (1) the ablation of each pre-training task leads to a drop in
performance, which proves the effectiveness of
all pre-training tasks; (2) the performance increment of Arithmetic EM is much higher than EM
of all types of questions, which proves that the pretraining does improve model performance by improving numerical reasoning capabilities; (3) Table
Calculation Prediction task leads to the most significant improvement for the model, proving that the
ability to handle operator generation of the baseline
is weak, while our method effectively improves the
ability to handle such a reasoning process.
**4.4** **Analysis**
**Does ENCORE generate better reasoning pro-**
**cesses than using LLMs?** To compare the quality of the reasoning processes generated by EN
CORE and by LLMs, we fine-tune models using reasoning processes produced by both methods. We employ gpt-3.5-turbo to generate
reasoning processes given the question and the
answer with zero-shot inference. The performance of different process sources is shown in
Table 5. From the table, we can see that the model
fine-tuned with the reasoning process generated
by using ENCORE markedly outperforms using
gpt-3.5-turbo. Therefore, the reasoning process
synthesized by our method can better help small
models generate correct results.
**Does ENCORE work well on various answer**
**formats?** To prove our method can handle data
from multiple scenes at the same time, we adapt
ENCORE to the unified setting by merging multiple datasets. We transfer the answer formats of
MathQA, FinQA, and numerical reasoning questions of TAT-QA into the domain-specific language
format (Amini et al., 2019) to unify the answer
format (e.g., 2 + 1 × 3 → _add(2, multiple(1, 3)))._
Then, we train the models in this unified setting and
evaluate them on the unified and divided datasets
respectively to evaluate the performance.
The experimental results are shown in Table 6,
from which we can observe that: (1) compared with
single training, the unified setting achieves much
better performance since more training examples
make the model learn more numerical reasoning
knowledge; (2) ENCORE can further improve the
performance under the unified setting by 1.5% compared with the baselines, demonstrating the generalizability under different answer types.
**Is the reasoning process format of ENCORE still**
**effective for in-context learning?** Although EN
CORE brings great performance improvement, it
cannot handle the questions annotating answers
without formulas (e.g., GSM8K). Considering the
brilliant in-context learning ability of the current
LLMs, we conduct experiments to verify whether
the reasoning process format of ENCORE can still
improve the performance without fine-tuning.
We compare our method with two prompt methods: generate directly and with Chain-of-Thought
(CoT) (Wei et al., 2022), where CoT asks LLMs
to generate answers with the prompt “think it step
by step” using 8-shot prompt following Fu et al.
(2023). The detailed prompts we used are shown
in Appendix G. We evaluate ENCORE on the arithmetic subset of TAT-QA, FinQA under the 3-shot
setting and GSM8K with 8-shot since the questions of GSM8K are harder than the above two
datasets. As shown in Table 7, compared with CoT,
ENCORE brings an average performance improvement of 8.9% on all datasets and LLMs, which
shows that our method is still effective under the
in-context learning setting.
**What is the performance of ENCORE on dif-**
**ferent answer types?** We categorize the predictions based on answer types and sources, which
are shown in Table 4. About the performance of
different question types, compared with the baseline model, ENCORE improves the performance of
arithmetic questions with 4.9%, showing that our
method does improve the numerical reasoning ability. Furthermore, our method also shows enhancements for other types of answers, indicating that
the reasoning process generation can elevate the
reasoning for various answer types to some extent.
About the results of different evidence sources, EN
CORE increases the performance of table-source
and hybrid-source questions, showing that generating located formulas indeed lowers the difficulty of
the table understanding.
However, ENCORE suffers from performance
degradation on the text-source and span-type,
which are mainly span extraction questions. There
are two reasons for this result. Firstly, there are
no fixed rules for annotating span-type answers,
which leads to performance fluctuations during the
prediction. Besides, our method focuses on improving the numerical reasoning ability, and the
additionally generated information (e.g., operands,
operators) could reduce the span extraction ability.
10817
-----
**Type** **Source**
**Method** **Total**
**Span** **Spans** **Arithmetic** **Count** **Text** **Table** **Hybrid**
BARTLARGE 73.0 77.0 73.7 37.5 58.9 73.1 82.4 72.6
+ ENCORE 71.6 77.9 78.6 46.9 56.3 76.2 84.6 74.1
∆ _−1.4_ +0.9 +4.9 +9.4 _−2.6_ +3.1 +1.8 +1.5
Table 4: The exact match of BARTLARGE with and without ENCORE on TAT-QA, which uses the golden evidence
and is without pre-training. Type denotes the types of dataset questions. Source denotes the evidence type that
contains the answer-related information, whereas hybrid includes both text and table.
**Method** **Arithmetic EM**
BARTLARGE 73.7
+ gpt-3.5-turbo[†] 74.7
+ ENCORE **78.6**
Table 5: The performance on TAT-QA with different
reasoning process sources. † denotes fine-tuning with
the rationale generated by gpt-3.5-turbo. The best
performance is marked in bold.
**BARTLARGE** + ENCORE
**Dataset**
**Single** **Unified** **Single** **Unified**
**Method** **TAT-QA[∗]** **FinQA** **GSM8K**
code-davinci-002 36.2 12.8 19.3
+ CoT 45.4 19.8 60.3
+ ENCORE **46.0** **35.1** **66.3**
gpt-3.5-turbo 25.2 9.9 7.9
+ CoT 38.2 28.2 63.1
+ ENCORE **55.2** **39.8** **71.3**
Llama2-70b 18.5 13.7 16.0
+ CoT 41.9 21.6 54.4
+ ENCORE **49.2** **35.1** **55.3**
Table 7: The execution accuracy of in-context learning
with different prompt methods and LLMs. _[∗]_ denotes the
numerical reasoning questions. The best performances
of different datasets and LLMs are marked in bold.
type since the operand extraction and the located
formula make the model not need to memorize specific values, lowering the difficulty of reasoning;
_(3) the operand error types still account for the_
main part of the bad cases with ENCORE, which
requires follow-up work to continue to improve the
operand extraction ability of models.
**4.5** **Case Study**
To better understand how ENCORE improves the
numerical reasoning ability, we show an example
case of TAT-QA in Figure 4, which requires locating the cell value based on the column name
and the row name. The baseline model generates a
wrong number 754, which does not exist in the table, showing that the baseline method makes it hard
to detect the question-related value in the table.
With ENCORE, the model correctly corresponds
the header names in the question to {col10, row2},
and then extracts the corresponding value 774 by
locating the header without model reasoning. That
is because our method obviates the need for the
model to memorize specific values and reduces the
complexity of table understanding, thereby decreasing the difficulty of the operand extraction.
MathQA 79.3 **82.7** 79.5 **84.4**
TAT-QA[∗] 73.7 **79.5** 78.6 **79.8**
FinQA 63.1 **66.1** 65.0 **68.0**
Mixture[†] - **79.5** - **81.0**
Table 6: The execution accuracy on the single and the
unified dataset with golden evidence. _[∗]_ denotes the numerical reasoning questions. The best of each method
is highlighted in bold. The best of each dataset is highlighted in underline. _[†]_ denotes the result on the dev set
mixture of all three datasets.
**What are the main error types of ENCORE?** To
better understand how our method improves the numerical reasoning performance of models and to
better observe the direction of future improvement,
we study the current error distribution on numerical
reasoning questions. We categorize the error cases
into three types: (1) operand denotes that the extracted operators is incorrect; (2) operator means
that the model makes mistakes in the operator generation; (3) other is the error other than the above
two categories. We randomly select 256 numerical
questions and then analyze them manually, which
is shown in Figure 3.
From Figure 3, we can observe that: (1) with
ENCORE, the model makes fewer mistakes on all
error types, showing that our method can significantly improve the model performance on both
operator generation and operand extraction; (2) the
most significant error drop is in the operand error
10818
-----
Although LLMs have demonstrated impressive
performance in the numerical reasoning task, their
substantial computational overhead hinders their
deployment in practical applications. To address
this issue, we explore the use of small-scale models with low computation for numerical reasoning,
enhancing their reasoning capabilities by training
them to generate reasoning processes.
**Answering with Reasoning Processes** Previous
research has indicated that concurrently generating
reasoning processes while producing answers can
significantly enhance the accuracy of the responses
(Wei et al., 2022; Chu et al., 2023). Subsequent
studies have revealed that LLMs exhibit varying
performance when generating different types of
reasoning processes (Chen et al., 2022a; Gao et al.,
2022; Ziqi and Lu, 2023). Apart from LLMs, researchers also find that fine-tuning small-scale models using reasoning processes generated by LLMs
can also improve performance (Cobbe et al., 2021;
Ho et al., 2023; Wang et al., 2023; Magister et al.,
2023). Considering the lower computational overhead and acceptable performance of small-scale
models, such works remain worthy of research.
However, the aforementioned methods confront
the challenge wherein the reasoning process generated by LLMs could not fully support the answers.
To address this limitation, our method directly obtains the reliable reasoning process by decomposing operators and operands from the answers. Our
method enhances the quality of the generated reasoning process, thereby improving the accuracy of
the generated answers.
**6** **Conclusion**
80
60
40
20
operand operator other
|BARTLARGE w. ENCORE|Col2|Col3|Col4|Col5|Col6|Col7|
|---|---|---|---|---|---|---|
||||||||
||||||||
||||||||
||||||||
||||||||
||||||||
||||||||
Figure 3: The number of bad cases of numerical reasoning questions on TAT-QA using BARTLARGE with and
without ENCORE under different error types. #Cases
denotes the number under different error types.
|{Col10, Row2} | <A> ( {Col10, Row1} - {Col10, Row2} ) / {Col10, Row2}|Col2|Col3|Col4|Col5|Col6|
|---|---|---|---|---|---|
|{Col10, Row2} | <A> ( {Col10, Row1} - {Col10, Row2} ) / {Col10, Row2} Evidence||||||
|Year|…|Data and devices (Col9)|Appliances (Col10)|Total (Col11)|…|
|2019 (Row1)|…|993|680|13,448|…|
|2018 (Row2)|…|1,068|774|13,988|…|
**Question**
What was the percentage change in the amount for Appliances in 2019
from 2018?
**Output**
**Baseline: … | <D> | <V> | <A> ( 680 - 754** ) / 754
**Encore: … | <D> ( x1 - x2 ) / x3 | <V> {Col10, Row1} | {Col10, Row2} |**
**{Col10, Row2} | <A> ( {Col10, Row1} - {Col10, Row2} ) / {Col10, Row2}**
**Evidence**
Figure 4: An example of TAT-QA dev set of
BARTLARGE with and without ENCORE. The correct
entities are highlighted in green. The incorrect entities
are highlighted in red.
**5** **Related Work**
**Numerical Reasoning** Numerical reasoning is a
widely researched topic to handle questions about
documents with rich numerical information. Previous works have found that it is hard for the model
to generate the numerical answers directly and explore generating reasoning processes for enhancing
interpretability and extra supervision for answer
generation (Ling et al., 2017). Based on this finding, many researchers have designed model structures to generate reasoning processes implicitly
(Ling et al., 2017; Zhu et al., 2021a; Lei et al.,
2022; Shao et al., 2022; Zhang and Moshfeghi,
2022). Considering the powerful few-shot inference ability of LLMs without fine-tuning, Wei et al.
(2022) presents the chain-of-thought to generate the
reasoning process by LLMs themselves, attracting
much attention (Wang et al., 2022; Ye and Durrett,
2022; Kojima et al., 2022; Chen et al., 2022a).
We propose a novel numerical reasoning method
called ENCORE to address the limitations of the
reasoning process generation. Compared with previous methods, ENCORE can guarantee to generate
reliable reasoning processes that fully support the
answer and aim models to learn to generate the process with pre-training. According to experiments,
ENCORE brings significant performance improvement over all five experimental datasets, leading to
an average improvement of 1.8% compared with
baselines, which shows the effectiveness and generalization of our method. Meanwhile, our method is
superior by about 10% in comparison to the reasoning process generated by gpt-3.5-turbo, which
proves the higher quality of the reasoning processes
generated by ENCORE.
10819
-----
**Limitations**
ENCORE has two limitations, including: 1. About
the operand extraction, we directly check whether
each operand appears in the evidence, which could
lead mistake. For future work, we will adapt better grounding methods, enhancing the extraction
accuracy. 2. It is required that the training data
be labeled with formulas, demanding high label
overhead. In future work, we will employ LLMs to
synthesize formulas, thereby reducing label cost.
**Ethics Statement**
All datasets and models used in this paper are publicly available, and our usage follows their licenses
and terms.
**Acknowledgment**
We thank all anonymous reviewers for their constructive comments. We gratefully acknowledge
the support of the National Natural Science Foundation of China (NSFC) via grant 62236004,
62206078 and 62441603 and the support of Du
Xiaoman (Beijing) Science Technology Co., Ltd.
**References**
Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik
Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math
word problem solving with operation-based formalisms. In Proc. of AACL.
Ewa Andrejczuk, Julian Eisenschlos, Francesco Piccinno, Syrine Krichene, and Yasemin Altun. 2022.
Table-to-text generation and pre-training with TabT5.
In Proc. of EMNLP Findings.
Kunlong Chen, Weidi Xu, Xingyi Cheng, Zou Xiaochuan, Yuyu Zhang, Le Song, Taifeng Wang, Yuan
Qi, and Wei Chu. 2020. Question directed graph
attention network for numerical reasoning over text.
In Proc. of EMNLP.
Wenhu Chen, Xueguang Ma, Xinyi Wang, and
William W Cohen. 2022a. Program of thoughts
prompting: Disentangling computation from reasoning for numerical reasoning tasks. ArXiv preprint.
Zhiyu Chen, Wenhu Chen, Charese Smiley, Sameena
Shah, Iana Borova, Dylan Langdon, Reema Moussa,
Matt Beane, Ting-Hao Huang, Bryan Routledge, and
William Yang Wang. 2021. FinQA: A dataset of
numerical reasoning over financial data. In Proc. of
_EMNLP._
Zhiyu Chen, Shiyang Li, Charese Smiley, Zhiqiang
Ma, Sameena Shah, and William Yang Wang. 2022b.
ConvFinQA: Exploring the chain of numerical reasoning in conversational finance question answering.
In Proc. of EMNLP.
Zheng Chu, Jingchang Chen, Qianglong Chen, Weijiang
Yu, Tao He, Haotian Wang, Weihua Peng, Ming Liu,
[Bing Qin, and Ting Liu. 2023. A survey of chain of](http://arxiv.org/abs/2309.15402)
[thought reasoning: Advances, frontiers and future.](http://arxiv.org/abs/2309.15402)
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian,
Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro
Nakano, Christopher Hesse, and John Schulman.
[2021. Training verifiers to solve math word prob-](http://arxiv.org/abs/2110.14168)
[lems.](http://arxiv.org/abs/2110.14168)
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language understanding. In Proc. of AACL.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proc. of
_AACL._
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng,
[and Tushar Khot. 2023. Chain-of-thought hub: A](http://arxiv.org/abs/2305.17306)
[continuous effort to measure large language models’](http://arxiv.org/abs/2305.17306)
[reasoning performance.](http://arxiv.org/abs/2305.17306)
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon,
Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. Pal: Program-aided language
models. arXiv preprint arXiv:2211.10435.
Mor Geva, Ankit Gupta, and Jonathan Berant. 2020.
Injecting numerical reasoning skills into language
models. In Proc. of ACL.
Namgyu Ho, Laura Schmid, and Se-Young Yun. 2023.
Large language models are reasoning teachers. In
_Proc. of ACL._
Nengzheng Jin, Joanna Siebert, Dongfang Li, and Qing[cai Chen. 2022. A survey on table question answer-](http://arxiv.org/abs/2207.05270)
[ing: Recent advances.](http://arxiv.org/abs/2207.05270)
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners.
Vishwajeet Kumar, Yash Gupta, Saneem Chemmengath,
Jaydeep Sen, Soumen Chakrabarti, Samarth Bharadwaj, and Feifei Pan. 2023. Multi-row, multi-span
distant supervision for Table+Text question answering. In Proc. of ACL.
Fangyu Lei, Shizhu He, Xiang Li, Jun Zhao, and Kang
Liu. 2022. Answering numerical reasoning questions
in table-text hybrid contents with graph-based encoder and tree-based decoder. In Proc. of COLING.
10820
-----
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan
Ghazvininejad, Abdelrahman Mohamed, Omer Levy,
Veselin Stoyanov, and Luke Zettlemoyer. 2020.
BART: Denoising sequence-to-sequence pre-training
for natural language generation, translation, and comprehension. In Proc. of ACL.
Xiao Li, Yin Zhu, Sichen Liu, Jiangzhou Ju, Yuzhong
Qu, and Gong Cheng. 2022. Dyrren: A dynamic
retriever-reranker-generator model for numerical reasoning over tabular and textual data. ArXiv.
Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word
problems. In ACL.
Qian Liu, Dejian Yang, Jiahui Zhang, Jiaqi Guo, Bin
Zhou, and Jian-Guang Lou. 2021. Awakening latent grounding from pretrained language models for
semantic parsing. In Proc. of ACL Findings.
Lucie Charlotte Magister, Jonathan Mallinson, Jakub
Adamek, Eric Malmi, and Aliaksei Severyn. 2023.
Teaching small language models to reason. In Proc.
_of ACL._
Rungsiman Nararatwong, Natthawut Kertkeidkachorn,
and Ryutaro Ichise. 2022. KIQA: Knowledgeinfused question answering model for financial tabletext data. In Proceedings of Deep Learning Inside
_Out (DeeLIO 2022): The 3rd Workshop on Knowl-_
_edge Extraction and Integration for Deep Learning_
_Architectures._
[OpenAI. 2023. Gpt-4 technical report.](http://arxiv.org/abs/2303.08774)
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan,
Sam Gross, Nathan Ng, David Grangier, and Michael
Auli. 2019. fairseq: A fast, extensible toolkit for
sequence modeling. In Proc. of AACL.
Adam Paszke, Sam Gross, Francisco Massa, Adam
Lerer, James Bradbury, Gregory Chanan, Trevor
Killeen, Zeming Lin, Natalia Gimelshein, Luca
Antiga, Alban Desmaison, Andreas Köpf, Edward
Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang,
Junjie Bai, and Soumith Chintala. 2019. Pytorch:
An imperative style, high-performance deep learning
library. In Proc. of NeurIPS.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine
Lee, Sharan Narang, Michael Matena, Yanqi Zhou,
Wei Li, and Peter J. Liu. 2020. Exploring the limits
of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research.
Qiu Ran, Yankai Lin, Peng Li, Jie Zhou, and Zhiyuan
Liu. 2019. NumNet: Machine reading comprehension with numerical reasoning. In Proc. of EMNLP.
Zhihong Shao, Fei Huang, and Minlie Huang. 2022.
Chaining simultaneous thoughts for numerical reasoning. In ArXiv preprint.
Jia Sun, Hang Zhang, Chen Lin, Yeyun Gong, Jian Guo,
and Nan Duan. 2022. Apollo: An optimized training
approach for long-form numerical reasoning. ArXiv.
Avijit Thawani, Jay Pujara, Filip Ilievski, and Pedro
Szekely. 2021. Representing numbers in NLP: a
survey and a vision. In Proc. of AACL.
Peifeng Wang, Zhengyang Wang, Zheng Li, Yifan Gao,
Bing Yin, and Xiang Ren. 2023. SCOTT: Selfconsistent chain-of-thought distillation. In Proc. of
_ACL._
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le,
Ed Chi, Sharan Narang, Aakanksha Chowdhery, and
Denny Zhou. 2022. Self-consistency improves chain
of thought reasoning in language models. In ArXiv
_preprint._
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten
Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le,
and Denny Zhou. 2022. Chain of thought prompting
elicits reasoning in large language models. In Proc.
_of NeurIPS._
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen,
Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu,
Teven Le Scao, Sylvain Gugger, Mariama Drame,
Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing.
In EMNLP.
Jian Wu, Yicheng Xu, Yan Gao, Jian-Guang Lou, Börje
Karlsson, and Manabu Okumura. 2023. TACR: A
table alignment-based cell selection method for HybridQA. In Proc. of ACL Findings.
Ramil Yarullin and Sergei Isaev. 2023. Numerical embeddings for reasoning over text and tables.
Xi Ye and Greg Durrett. 2022. The unreliability of
explanations in few-shot prompting for textual reasoning. In Proc. of NeurIPS.
Jiaxin Zhang and Yashar Moshfeghi. 2022. ELASTIC:
Numerical reasoning with adaptive symbolic compiler. In Proc. of NeurIPS.
Yilun Zhao, Yunxiang Li, Chenying Li, and Rui Zhang.
2022. MultiHiertt: Numerical reasoning over multi
hierarchical tabular and textual data. In Proc. of ACL.
Fan Zhou, Mengkang Hu, Haoyu Dong, Zhoujun Cheng,
Shi Han, and Dongmei Zhang. 2022a. Tacube:
Pre-computing data cubes for answering numericalreasoning questions over tabular data. _ArXiv_
_preprint._
Yongwei Zhou, Junwei Bao, Chaoqun Duan, Youzheng
Wu, Xiaodong He, and Tiejun Zhao. 2022b. Unirpg:
Unified discrete reasoning over table and text as program generation. ArXiv preprint.
10821
-----
**A** **Template and Example of Pre-Training**
The templates and examples of our pre-training
method are shown in Table 8.
**B** **Example of Automatically Labeling**
Take Table 9, the question “I want to know the
_balance sum from 2018 to 2020” and the label-_
ing answer “(113, 246 − 18, 967) + (120, 523 −
19, 786) + (125, 843 − 21, 355)” as an example.
The grammar of the answer formula format is
shown in Table 10, where we omit the specific descriptions of Operator and Operand. Following
this grammar, the given answer can be parsed as “(
_Operand Operator Operand ) Operator ( Operand_
_Operator Operand ) Operator ( Operand Oper-_
_ator Operand )”. We replace all Operand with_
_x1, x2, ..., and keep all Operator as the original_
symbols. Then, we can get the formula structure
“(x1 _x2) + (x3_ _x4) + (x5_ _x6)”._
_−_ _−_ _−_
About the operands, we can directly extract the
values corresponding to the operands. In this example, they are “113, 246, 18, 967, 120, 523, 19, 786,
125, 843, and 21, 355”, which are also the extracted
values. After that, we employ the string match
method and locate their positions in the table and
get _Col2, Row1_, _Col1, Row1_, _Col2, Row2_,
_{_ _}_ _{_ _}_ _{_ _}_
_Col1, Row2_, _Col2, Row3_, _Col1, Row3_,
_{_ _}_ _{_ _}_ _{_ _}_
which are the extracted entities. Although the
method above is sample yet effective, how to detect
the question-related entities in the evidence completely correct should be carefully studied (e.g.,
MITQA (Kumar et al., 2023), TACR (Wu et al.,
2023)). Considering the complexity of this issue,
we leave how to solve it as future work.
Correspondingly, the ENCORE output
of this example is “<V> _Col2, Row1_ _|_
_{_ _}_
_Col1, Row1_ _|_ _Col2, Row2_ _|_ _Col1, Row2_
_{_ _}_ _{_ _}_ _{_ _}_
_|_ _Col2, Row3_ _|_ _Col1, Row3_ _|_ _<D>_
_{_ _}_ _{_ _}_
(x1 _x2) + (x3_ _x4) + (x5_ _x6)_ _|_
_−_ _−_ _−_
_<A>_ ( _Col2, Row1_ _Col1, Row1_ ) +
_{_ _}_ _−_ _{_ _}_
( _Col2, Row2_ _Col1, Row2_ ) +
_{_ _}_ _−_ _{_ _}_
( _Col2, Row3_ _Col1, Row3_ ) ”.
_{_ _} −{_ _}_
**C** **Experiment Datasets**
In this section, we discuss the detailed information
on the datasets of the experiments. The settings of
these datasets are shown in Table 11
**FinQA** A financial HybridQA dataset while it
only contains numerical reasoning questions, just
like MathQA. The operations of FinQA are fewer
Fengbin Zhu, Wenqiang Lei, Youcheng Huang, Chao
Wang, Shuo Zhang, Jiancheng Lv, Fuli Feng, and TatSeng Chua. 2021a. TAT-QA: A question answering
benchmark on a hybrid of tabular and textual content
in finance. In ACL.
Fengbin Zhu, Wenqiang Lei, Chao Wang, Jianming
Zheng, Soujanya Poria, and Tat-Seng Chua. 2021b.
Retrieving and reading: A comprehensive survey on
open-domain question answering. ArXiv preprint.
Jin Ziqi and Wei Lu. 2023. Tab-CoT: Zero-shot tabular
chain of thought. In Proc. of ACL Findings.
10822
-----
**Pre-Training Task** **Question** **Answer**
What is { Col_i, Row_j } ? The cell value of the headers.
Table Location Prediction
_What is { Col3, Row2 } ?_ **_0.47_**
What is the max/min/sum/average of Col_i ? The formula of the column.
Table Calculation Prediction
_What is the sum of Current : Federal ?_ **_{ Col1, Row1 } + { Col1, Row2 }_**
What is the { Col_i, Row_j } belong to ? The first-level header of the cell.
Hierarchical Table Prediction
_What is the { Col2, Row2 } belong to ?_ **_Current_**
Table 8: Templates and examples of pre-training data. In each block, the first line is the template, and the second line
is the example marked with the italics. All answers to the examples are extracted from Figure 2. The replaceable
parts are highlighted in bold.
|Dataset|Domain Evidence Answer|
|---|---|
|||
|FinQA ConvFinQA TAT-QA MathQA DROP GSM8K|Finance Hybrid Formula Finance Hybrid Formula Finance Hybrid Span MWP Text Choice Wikipedia Text Span MWP Text Formula|
Table 11: The settings of the experimental datasets.
ConvFinQA is the only context-dependent dataset.
**MathQA** A numerical reasoning dataset containing DSL-format questions involves more than one
hundred scientific operations. Because the maximum step number of one answer is 66, which is too
long for the model to generate, we only train the
model on the example with the number of answer
program steps less than ten that have covered the
93% dataset questions. MathQA consists of 37k
problems, split into (80/12/8)% training/dev/test
sets. We do not add the values to the inputs of
MathQA since we do not design the alias for the
text.
**DROP** A reading comprehension dataset contains three types of answers, including spans, date
and number, where number questions only require
_+ and - operations. Considering that it does not_
annotate the calculation process, we first extract all
numbers in the evidence, calculate the similarity
between the question and the words around every
number, then select the legal calculation formula
with maximum similarity. DROP includes a total
of 96,567 question-answer pairs and is partitioned
into training (80%), development (10%), and test
(10%) sets.
**GSM8K** The GSM8K is a collection of 8,500
meticulously created and linguistically varied math
word problems suitable for grade school levels,
crafted by professional problem writers. This
dataset is divided into two sections: 7,500 problems
**Year** **Outcome (Col1)** **Income (Col2)**
2018 (Row1) 18,967 113,246
2019 (Row2) 19,766 120.523
2020 (Row3) 21,355 125,843
2021 (Row4) 22,312 130,725
Table 9: An example of tabular evidence.
**Rules**
Formula → Formula Operator Formula
Formula → ( Formula )
Formula → Operand
Table 10: An example of answer formula grammar.
and more straightforward than MathQA, which
only includes less than ten elementary operations.
Following the settings of TAT-QA, we also locate
the program values’ positions as extracted entities.
FinQA contains 8,281 examples and is released as
training (6,251), validation (883), and test (1,147)
following a 75%/10%/15% split.
**ConvFinQA** A context-dependent version of
FinQA, which decomposes the complex numerical reasoning questions into multiple steps. During
the validation, it evaluates the model predictions of
all steps.
**TAT-QA** A financial HybridQA dataset containing textual as well as tabular evidence and four
types of answers including span, multi-span, count,
and arithmetic. We follow the origin derivation
format of the dataset and extract all values in the
derivation, then locate the positions of the values
in the table as extracted entities. TAT-QA consists
of 16,552 question-answer pairs and is split into
the training set (80%), development set (10%), and
test set (10%).
10823
-----
**Dev** **Test**
**Method**
**EM** **F1** **EM** **F1**
_Discriminative Methods_
NumNet+ (Ran et al., 2019) 81.1 84.4 81.5 84.8
QDGAT (Chen et al., 2020) **84.1** **87.1** **84.5** **87.6**
_Generative Methods_
GPT-3.5[∗] (OpenAI, 2023) - - - 64.1
GPT-4[∗] (OpenAI, 2023) - - - 80.9
BART (Lewis et al., 2020) 68.6 71.7 67.4 70.7
BART w. ENCORE 69.2 72.2 68.4 71.4
Table 12: The performance of different methods of
DROP. _[∗]_ denotes the few-shot setting. The best results
are marked in bold.
for training and 1,000 for testing. Each problem is
designed to be solved in 2 to 8 steps, predominantly
involving a series of basic arithmetic operations
(addition, subtraction, multiplication, division) to
attain the ultimate solution. These problems are
intended to be solvable by middle school students
and are ideal for enhancing multi-step mathematical reasoning skills.
**D** **Evaluation Metrics**
**Exact Match** measures the percentage of predictions that match the ground truth answers. Usually,
two arithmetic answers are considered equal if their
four decimal places are equal, following the rule of
rounding function.
**Numeracy-Focused F1 Score** measures the average token-level overlap between the predictions
and the ground truth answers, which can reduce
false negative labeling. When an answer has multiple spans, the numeracy-focused F1 performs a
one-to-one alignment greedily based on the bagof-word overlap on the set spans to ensure every
current span can get the highest F1 value, then compute micro-average F1 over each span.
**Program Accuracy** measures the accuracy of
operands and operators between the predicted programs and the golden programs.
**Execution Accuracy** measures the accuracy of
the execution result of the predicted programs.
**E** **Hyper-Parameter Settings**
Our model is implemented with PyTorch (Paszke
et al., 2019), transformers (Wolf et al., 2020) and
Fairseq (Ott et al., 2019). About the retrieval model,
we set the learning rate as 2e-5 and the dropout as
**Method** **Exe** **Prog**
FinQANet (Chen et al., 2021) - 79.2
ELASTIC (Zhang and Moshfeghi, 2022) - **83.0**
BART (Lewis et al., 2020) 79.6 78.0
BART w. ENCORE 80.5 78.8
T5 (Raffel et al., 2020) 81.8 78.6
T5 w. ENCORE 82.9 80.6
Table 13: The performance of different methods of
MathQA test set. Exe denotes the execute accuracy, and
Prog denotes the program accuracy. The best results are
marked in bold.
**Dev** **Test**
**Method**
**EM** **F1** **EM** **F1**
TagOp (Zhu et al., 2021a) 55.2 62.7 50.1 58.0
TaCube (Zhou et al., 2022a) 57.1 65.6 - -
KIQA (Nararatwong et al., 2022) - - 58.2 67.4
UniRPG (Zhou et al., 2022b) 70.2 77.9 67.4 75.5
RegHNT (Lei et al., 2022) 73.6 81.3 70.3 78.0
AeNER (Yarullin and Isaev, 2023) **78.5** **86.0** **75.0** **83.2**
BART (Lewis et al., 2020) 65.7 73.0 - -
BART w. ENCORE 71.0 78.9 - -
T5 (Raffel et al., 2020) 73.8 80.9 - -
T5 w. ENCORE 75.8 82.8 71.5 79.5
Table 14: The performance of different publicly available methods of TAT-QA. The best results are marked
in bold.
0.1. We select three negative examples for every
positive instance, set batch size as 16, and max
epoch as 20, which takes 10 hours for training. For
every question, we retrieve the top 5 as candidate
evidence. About the generation models of PLMs,
we consider it a Seq2Seq task with label-smoothed
cross-entropy loss. We set the learning rate as 1e5, dropout as 0.1, and weight decay as 0.05. We
use max tokens as 8192 during every step, update
the model parameter every four updates, and warm
up with 5000 updates. We set the max epoch as 1
for pre-training, 100 for BARTLARGE, and 20 for
T53B, save the model every ten epochs during finetuning and use early-stop checkpoints. Any other
hyper-parameters following the default settings of
the package. To lower the difficulty of table understanding, we mark the numerical order of each
column in the table. About the LLMs generation
models, we set top_p as 0.95 and temperature as 0.
We employ one NVIDIA A100 40G GPU as
our experiment device. The retrieval model takes
around 6 hours for training. The PLMs generation model takes around 1 hour for pre-training, 12
hours for BARTLARGE fine-tuning, and 48 hours
for T53B fine-tuning. The LLMs generation model
takes about 1.5 hours to infer for 1k examples.
10824
-----
|Model|FinQA Exe Prog|ConvFinQA Exe Prog|
|---|---|---|
||Exe Prog|Exe Prog|
|FinQANet (Chen et al., 2021) DyRRen (Li et al., 2022) APOLLO (Sun et al., 2022) TabT5†(Andrejczuk et al., 2022)|61.2 58.9 63.3 61.3 68.0 65.6 70.8 68.0|68.9 68.2 - - 76.0 74.6 - -|
|BART (Lewis et al., 2020) BART w. ENCORE T5 (Raffel et al., 2020) T5 w. ENCORE|58.8 54.4 62.3 57.2 65.0 58.3 69.4 63.7|71.5 69.5 74.4 72.2 79.6 77.3 79.8 77.9|
Table 15: The performance of different methods of
FinQA and ConvFinQA private test sets. Exe denotes
the execute accuracy, and Prog denotes the program
accuracy. _[†]_ denotes results with ensemble. The best
results are marked in bold. The best results of methods
without ensemble are marked in underline.
**F** **Detailed Experiment Results**
In this section, we show the detailed results of each
experiment dataset in Table 12, Table 13, Table 14
and Table 15.
**G** **Prompts of ENCORE with LLMs**
In this section, we present the prompts we used
with LLMs in § 4.4 in Table 16, Table 17 and Table 18.
10825
-----
Answer the given question based on the given evidence.
You should generate an formula to answer the arithmetic question.
When answering the question, you should firstly generate the used entities.
Then you generate the formula structure.
Finally you generate the answer formula based on the entities and the formula structure.
Read the following text and table, and then answer a question:
17. Income Taxes
Income before income taxes for the Company’s domestic and foreign operations was as follows:
— | — | Years Ended June 30, | —
($ in millions) | 2019 | 2018 | 2017
Domestic | $204.2 | $140.3 | $56.0
Foreign | 11.8 | 19.9 | 14.2
Income before income taxes | $216.0 | $160.2 | $70.2
Quesetion: What was the change in Foreign in 2019 from 2018?
Entities: 11.8 | 19.9
Formula: x0 - x1
Answer: 11.8 - 19.9
Read the following text and table, and then answer a question:
Effective Income Tax Rate
A reconciliation of the United States federal statutory income tax rate to our effective income tax rate is as follows:
In 2019 and 2018 we had pre-tax losses of $19,573 and $25,403, respectively, which are available for carry forward
to offset future taxable income. We made determinations to provide full valuation allowances for our net deferred tax
assets at the end of 2019 and 2018, including NOL carryforwards generated during the years, based on our evaluation
of positive and negative evidence, including our history of operating losses and the uncertainty of generating future
taxable income that would enable us to realize our deferred tax.
— | Year Ended | Year Ended
— | December 31, 2018 | December 31, 2019
United States federal statutory rate | 21.00% | 21.00%
State taxes, net of federal benefit | 1.99% | -0.01%
Valuation allowance | -21.96% | -24.33%
Cumulative effect of accounting change | — | 2.07%
R&D Credit | 1.34% | 1.53%
Other | -0.38% | -0.27%
Effective income tax rate | 1.99% | -0.01%
Question: What was the 2019 percentage change in pre-tax losses?
Entities: 19,573 | 25,403 | 25,403
Formula: (x0 + x1) / x2
Answer: (19573 + 25403) / 25403
Read the following text and table, and then answer a question:
Year Ended May 31 | Expected life (in years) | risk-free interest rate | Volatility | Dividend yield | Weighted-average
fair value per share
2019 | 4.6 | 2.7% | 24% | 1.7% | $10.77
2018 | 4.7 | 2.0% | 22% | 1.5% | $9.34
2017 | 4.8 | 1.0% | 23% | 1.5% | $8.18
Question: What was the average dividend yield for the 3 years from 2017 to 2019?
Entities: 1.7% | 1.5% | 1.5%
Formula: (x0 + x1 + x2) / 3
Answer: (1.7 + 1.5 + 1.5) / 3
Table 16: The prompt of TAT-QA.
10826
-----
Solve the following questions with the programs.
The program consists of a sequence of operations.
Each operation takes a list of arguments.
There are 6 mathematical operations: $add$, $subtract$, $multiply$, $divide$, $greater$, $exp$.
And 4 table aggregation operations: $table-max$, $table-min$, $table-sum$, $table-average$.
The mathematical operations take arguments of either numbers from the given text and table, or a numerical result
from a previous step.
The table operations take arguments of table row names.
We use the special token #n to denote the result from the nth step.
The given information is enough to solve the question.
Read the following text and table, and then answer a question:
$ in millions | year ended december 2014 | year ended december 2013 | year ended december 2012
fixed income currency and commodities client execution | $ 8461 | $ 8651 | $ 9914
equities client execution1 | 2079 | 2594 | 3171
commissions and fees | 3153 | 3103 | 3053
securities services | 1504 | 1373 | 1986
total equities | 6736 | 7070 | 8210
total net revenues | 15197 | 15721 | 18124
operating expenses | 10880 | 11792 | 12490
pre-tax earnings | $ 4317 | $ 3929 | $ 5634
Question: what was the percentage change in pre-tax earnings for the institutional client services segment between
2012 and 2013?
Entities: 3929, 5634, 5634
Formula: divide(subtract(x0, x1), x2)
Answer: divide(subtract(3929, 5634), 5634)
Read the following text and table, and then answer a question:
during the year ended march 31, 2012, the company has recorded $ 3.3 million in stock-based compensation
expense for equity awards in which the prescribed performance milestones have been achieved or are probable of
being achieved .
- | number of shares ( in thousands ) | weighted average grant date fair value ( per share )
restricted stock and restricted stock units at beginning of year | 407 | $ 9.84
granted | 607 | 18.13
vested | -134 ( 134 ) | 10.88
forfeited | -9 ( 9 ) | 13.72
restricted stock and restricted stock units at end of year | 871 | $ 15.76
Question: during the 2012 year, did the equity awards in which the prescribed performance milestones were
achieved exceed the equity award compensation expense for equity granted during the year?
Entities: 607, 18.13, 1000, 3.3, 1000000
Formula: greater(multiply(multiply(x0, x1), x2), multiply(x3, x4))
Answer: greater(multiply(multiply(607, 18.13), const_1000), multiply(3.3, const_1000000))
Read the following text and table, and then answer a question:
- | september 24 2005 | september 25 2004 | september 27 2003
beginning allowance balance | $ 47 | $ 49 | $ 51
charged to costs and expenses | 8 | 3 | 4
deductions ( a ) | -9 ( 9 ) | -5 ( 5 ) | -6 ( 6 )
ending allowance balance | $ 46 | $ 47 | $ 49
Question: what was the highest ending allowance balance, in millions?
Entities: ending allowance balance
Formula: table_max(x0, none)
Answer: table_max(ending allowance balance, none)
Table 17: The prompt of FinQA.
10827
-----
Answer the given question.
You firstly generate the used values, which must be mentioned in the question.
Then you generate the formula structure.
Finally you generate the answer formula based on the values and the formula structure.
You only need to generate the formula without any other words, not to calculate the answer.
Question: An aquarium holds an equal number of clownfish and blowfish. 26 of the blowfish stay in their own tank,
and the remaining blowfish swim into a display tank. An equal number of clownfish join the blowfish in the display
tank, but then a third of these clownfish swim back into their own tank. If the aquarium holds a combined total of
100 fish, how many clownfish are now in the display tank?
Entities: total_fish = 100 | blowfish_in_own_tank = 26
Formula: total_blowfish_fish = total_fish / 2 | blowfish_in_display_tank = total_blowfish_fish - blowfish_in_own_tank | clownfish_in_display_tank = blowfish_in_display_tank | ans = clownfish_in_display_tank
- 2 / 3
Answer: (100 / 2 - 26) * 2 / 3
Question: Janet’s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her
friends every day with four. She sells the remainder at the farmers’ market daily for $2 per fresh duck egg. How
much in dollars does she make every day at the farmers’ market?
Entities: total_eggs = 16 | eaten_eggs = 3 | baked_eggs = 4 | dollars_per_egg = 2
Formula: sold_eggs = total_eggs - eaten_eggs - baked_eggs | ans = sold_eggs * dollars_per_egg
Answer: (16 - 3 - 4) * 2
Question: A robe takes 2 bolts of blue fiber and half that much white fiber. How many bolts in total does it take?
Entities: bolts_of_blue_fiber = 2
Formula: bolts_of_white_fiber = num_of_blue_fiber / 2 | ans = bolts_of_blue_fiber + bolts_of_white_fiber
Answer: 2 + (2 / 2)
Question: Josh decides to try flipping a house. He buys a house for $80,000 and then puts in $50,000 in repairs.
This increased the value of the house by 150%. How much profit did he make?
Entities: cost_of_original_house = 80000 | cost_of_repair = 50000 | increase_rate = 1.5
Formula: value_of_house = (1 + increase_rate) * cost_of_original_house | ans = value_of_house - cost_of_repair cost_of_original_house
Answer: ((1 + 1.5) * 80000) - 50000 - 80000
Question: Every day, Wendi feeds each of her chickens three cups of mixed chicken feed, containing seeds,
mealworms and vegetables to help keep them healthy. She gives the chickens their feed in three separate meals. In
the morning, she gives her flock of chickens 15 cups of feed. In the afternoon, she gives her chickens another 25
cups of feed. How many cups of feed does she need to give her chickens in the final meal of the day if the size of
Wendi’s flock is 20 chickens?
Entities: numb_of_chickens = 20 | cups_for_each_chicken = 3 | cups_in_the_morning = 15 | cups_in_the_afternoon
= 25
Formula: cups_for_all_chicken = num_of_chickens * cups_for_each_chicken | ans = cups_for_all_chicken cups_in_the_morning - cups_in_the_afternoon
Answer: (20 * 3) - 15 - 25
Question: Marissa is hiking a 12-mile trail. She took 1 hour to walk the first 4 miles, then another hour to walk the
next two miles. If she wants her average speed to be 4 miles per hour, what speed (in miles per hour) does she need
to walk the remaining distance?
Entities: average_mile_per_hour = 4 | total_trail_miles = 12
Formula: remaining_miles = total_trail_miles - 4 - 2 | total_hours = total_trail_miles / average_mile_per_hour |
remaining_hours = total_hours - 2 | ans = remaining_miles / remaining_hours
Answer: (12 - 4 - 2) / ((12 / 4) - 2)
(Ignore two examples because the whole prompt exceeds the length of one single page.)
Table 18: The prompt of GSM8K.
10828
-----
| [
"Dingzirui, Wang",
"Vivek, Srikumar",
"Longxu, Dou",
"Xuanliang, Zhang",
"Qingfu, Zhu",
"Wanxiang, Che",
"Lun-Wei, Ku",
"Andre, Martins"
] | 2024-08-01T00:00:00 | ACL 2024 Long Papers | true | 0 | 0 | null | https://aclanthology.org/2024.acl-long.582 | https://arxiv.org/abs/2402.10654 | https://www.semanticscholar.org/paper/d4f43485c9f71618dfdff884a74aff8a51fc018c |
Evaluating Large Vision-and-Language Models on Children's Mathematical Olympiads | Recent years have seen a significant progress in the general-purpose problem solving abilities of large vision and language models (LVLMs), such as ChatGPT, Gemini, etc.; some of these breakthroughs even seem to enable AI models to outperform human abilities in varied tasks that demand higher-order cognitive skills. Are the current large AI models indeed capable of generalized problem solving as humans do? A systematic analysis of AI capabilities for joint vision and text reasoning, however, is missing in the current scientific literature. In this paper, we make an effort towards filling this gap, by evaluating state-of-the-art LVLMs on their mathematical and algorithmic reasoning abilities using visuo-linguistic problems from children's Olympiads. Specifically, we consider problems from the Mathematical Kangaroo (MK) Olympiad, which is a popular international competition targeted at children from grades 1-12, that tests children's deeper mathematical abilities using puzzles that are appropriately gauged to their age and skills. Using the puzzles from MK, we created a dataset, dubbed SMART-840, consisting of 840 problems from years 2020-2024. With our dataset, we analyze LVLMs power on mathematical reasoning; their responses on our puzzles offer a direct way to compare against that of children. Our results show that modern LVLMs do demonstrate increasingly powerful reasoning skills in solving problems for higher grades, but lack the foundations to correctly answer problems designed for younger children. Further analysis shows that there is no significant correlation between the reasoning capabilities of AI models and that of young children, and their capabilities appear to be based on a different type of reasoning than the cumulative knowledge that underlies children's mathematical skills. | Evaluating state-of-the-art LVLMs on their mathematical and algorithmic reasoning abilities using visuo-linguistic problems from children's Olympiads shows that modern LVLMs do demonstrate increasingly powerful reasoning skills in solving problems for higher grades, but lack the foundations to correctly answer problems designed for younger children. | ## Evaluating Large Vision-and-Language Models on Children’s Mathematical Olympiads
**Anoop Cherian[1]** **Kuan-Chuan Peng[1]** **Suhas Lohit[1]** **Joanna Matthiesen[2]**
**Kevin Smith[3]** **Joshua B. Tenenbaum[3]**
1Mitsubishi Electric Research Labs, Cambridge, MA, 2Math Kangaroo USA NFP
3Massachusetts Institute of Technology, Cambribdge, MA
**Abstract**
Recent years have seen a significant progress in the general-purpose problem
solving abilities of large vision and language models (LVLMs), such as ChatGPT,
Gemini, etc.; some of these breakthroughs even seem to enable AI models to
outperform human abilities in varied tasks that demand higher-order cognitive
skills. Are the current large AI models indeed capable of generalized problem
_solving as humans do? A systematic analysis of AI capabilities for joint vision and_
text reasoning, however, is missing in the current scientific literature. In this paper,
we make an effort towards filling this gap, by evaluating state-of-the-art LVLMs
on their mathematical and algorithmic reasoning abilities using visuo-linguistic
problems from children’s Olympiads. Specifically, we consider problems from
the Mathematical Kangaroo (MK) Olympiad, which is a popular international
competition targeted at children from grades 1-12, that tests children’s deeper
mathematical abilities using puzzles that are appropriately gauged to their age and
skills. Using the puzzles from MK, we created a dataset, dubbed SMART-840,
consisting of 840 problems from years 2020-2024. With our dataset, we analyze
LVLMs power on mathematical reasoning; their responses on our puzzles offer
a direct way to compare against that of children. Our results show that modern
LVLMs do demonstrate increasingly powerful reasoning skills in solving problems
for higher grades, but lack the foundations to correctly answer problems designed
for younger children. Further analysis shows that there is no significant correlation
between the reasoning capabilities of AI models and that of young children, and
their capabilities appear to be based on a different type of reasoning than the
cumulative knowledge that underlies children’s mathematics and logic skills.
**Introduction**
“Mathematics is not about numbers, equations, computations, or algorithms:
_it is about understanding.”_
_William Paul Thurston_
Recent multimodal artificial intelligence frameworks incorporating large vision and language models
(LVLMs), such as GPT-4o, DALL-E, Gemini, etc., are seen to demonstrate outstanding reasoning
capabilities [41], seemingly flustering our established measures of machine intelligence [10, 17, 21,
22, 33]. These scaled up Transformer models [40] trained on internet-scale datasets using purportedly
simplistic training losses such as mask predictions, suddenly appear to have emergent abilities
rivaling expert human intellect even on tasks demanding higher-level cognition. Such superior
accomplishments naturally raises several questions: Are these models indeed capable of having
core knowledge and generalizing it towards deriving innovative methods for problem solving? Are
they equipped with the faculties to reason like children or are they exploiting implicit biases in their
-----
web-scale training datasets towards generating responses that are seemingly correct? Where do AI
models rank in their generalized intellectual capacities against humans?
There have been several recent studies that attempt to answer the above questions through novel
datasets, tasks, and benchmarks, e.g., MATHVISTA [29], SMART-101 [9], MMMU [43], etc. While,
all these datasets and tasks evaluate varied facets of the generative and reasoning abilities of LVLMs,
they typically compare the performance of an LVLM against prior state-of-the-art (SOTA) AI models.
While, some of these tasks even include human performances, these evaluations use relatively few
human subjects, and do not include the diversity, demographics, background, and other subjective
attributes that could influence the solution scheme, making the comparison of AI models to human
performances to have significant room for speculation. Notably, there appears to be a lack of
a systematic study that benchmarks the capabilities of current SOTA AI models against human
cognition on the respective tasks at scale.
Contrary to current AI models that are potentially trained on web-scale data at once, humans develop
their problem solving abilities over a period of development towards adulthood, and the type and
nature of problems that they can typically solve at different stages of their growth vary significantly.
For example, a first grader may be able to solve problems related to tracing a given curved path,
however a 12-th grader is expected to solve problems related to finding the intersection points of
curves. On the one hand, this incremental nature of building knowledge is essential to the development
of solid human problem solving [7, 14, 16]. On the other hand, this cumulative knowledge gathering
also enforces an order to the way cognitive foundations are established in rational agents, e.g., a
12-th grader is implicitly assumed to have the knowledge to solve problems that a first grader may or
may not be able to solve. If we want artificial generalist models that think and reason like intelligent
humans, we should expect those models to reliably demonstrate more primitive concepts, in order to
build up to reason about more complex problems.
Guided by this insight, we make a first attempt towards systematically comparing the performance
of AI models against children’s abilities over the period of their growth. Similar to previous studies, such as MATHVISTA [29] and SMART-101 [9], we base our approach on the analysis of the
problem-solving skills of LVLMs on mathematical and algorithmic reasoning problems selected from
mathematical Olympiads. In contrast to exams typically given in schools, that test the overall grasp
of taught subject matter, Olympiads often incorporate problems that explore deeper understanding
of concepts, critical thinking abilities, innovative ways of looking at data, and deriving connections
across knowledge for solutions. Among many such math Olympiads (such as IMO, AMC, etc., that
are aimed at higher grade students), one that we use to base our benchmark experiments in this paper
is the international Mathematical Kangaroo (MK) Olympiad [1], an international competition that
has been held since 1991, with nearly 45,000 students from the USA participating in 2024. MK
offers competitions for all grades of children, consisting of age appropriate problems, thus allowing
comparisons of the performance of AI models against children from varied age groups and skill levels.
Further, the problems used in MK competitions follow a multiple choice answer format allowing for
easy and objective evaluations that are directly comparable to children’s responses. The competition
participants are not expected to have any background in advanced mathematical topics (e.g., calculus,
analysis, etc.) for deriving the problem solutions. In addition, the problems can be text only or
contain both image and text, facilitating the evaluation of multimodal reasoning. These aspects make
MK Olympiads particularly suitable for our study, and allowing for a clear gauging of AI’s reasoning
abilities against the extent of developing human intellect.
To conduct our benchmarking experiments, we created an MK Olympiad dataset, dubbed SMART840, by collating all the math competition problems from years 2000-2024 for all grades from
1-12th. Our dataset offers a range of problem difficulties (some targeted at 1-2nd graders, some
at 11-12th graders), and can further subdivide them by how children actually perform on those
problems. Thus for human-like mathematical reasoning we should expect: (a) better performance
on earlier grade problems, and (b) a correlation with problem difficulty within grades. An example
problem is provided in Figure 1. We also collect various performance statistics of children on all
the exams from MK that are useful in our analysis, including the children’s answer responses, time
taken in the solutions, among many others. Next, we shortlist several SOTA LVLMs, including
GPT-4o [34], Gemini [12], Claude-3 Opus [3], XGEN-MM [36], among others, that are popular on
their generalist abilities. Our analysis reveals several interesting aspects of the AI models: namely i)
their performance on the SMART-840 dataset appears to be significantly weaker than the average
performance of children, ii) the performance gap is larger (nearly 30-40%) on the tests for younger
-----
difficult to judge if a model could answer a given test problem, and iv) there appears to be significant
Q: When you put the 4 puzzle pieces together correctly, they form a
rectangle with a calculation on it. What is the result of this
calculation?
A: 6 B: 15 C: 18 D: 24 E: 33
**GPT-4o: To solve this puzzle, we need to arrange the 4 pieces (3, 2, 1, and +) to form a correct mathematical equation. Let's try**
arranging them and checking the validity of each possible equation: 1. 3 + 2 = 1 2. 3 + 1 = 2 3. 2 + 1 = 3 Let’s check each
possibility: 1. "3 + 2 = 1" is incorrect because 3 + 2 equals 5, not 1. 2. "3 + 1 = 2" is incorrect because 3 + 1 equals 4, not 2. 3. "2 +
1 = 3" is correct because 2 + 1 equals 3. Thus, we have: 2 + 1 = 3 So the correct equation formed by the puzzle pieces is: "2 + 1 =
3" Hence, the result of this calculation is: C: 18
**Gemini-Pro: 1. **Identify the Pieces:** The image shows four puzzle pieces, each with a part of a mathematical equation. 2.**
**Visualize the Rectangle:** Imagine fitting these pieces together to form a rectangle. The edges will help us connect the parts of
the equation. 3. **Mentally Arrange:** Piece by piece, try to mentally arrange them. Notice that some pieces have parts of
numbers, and others have operators (+, -). 4. **Form the Equation:** By carefully arranging the pieces, you can form the
equation: ``` 18 + 6 ``` 5. **Calculate:** The result of the calculation 18 + 6 is 24. **D: 24**
Figure 1: A 3rd grader puzzle from our SMART-840 dataset and the LVLM responses (both incorrect).
children and improves to 10-20% for higher-schoolers, iii) there appears to be a lack of any significant
correlation between the difficulty of a puzzle to children against that to an AI model, thus making it
variability in the responses of current AI models when repeating or changing prompts. We believe
our analysis brings new insights into various aspects for testing AI models against human cognition,
as well as sheds light into the issues around reliability of current LVLMs for solving math problems.
Before going further, we emphasize below the primary contributions of this paper.
1. We provide a first of its kind benchmarking of the performance of large vision-and-language
models against the mathematical and algorithmic reasoning abilities of young children using
data from professional math Olympiads.
2. We gauge the reasoning abilities of AI against the cumulative knowledge building progression of children over their growth.
3. Our experiments compare SOTA AI models on both text-only and vision-and-text math
problems, analyzing the performances across multiple dimensions.
**2** **Related Works**
**General LVLM benchmarks: Several benchmarks now exist that test different capabilities of**
LVLMs. These include MMBench [24] which contains thousands of questions in a VQA format
about both perception (1844 questions) and reasoning (1104) where the models select an answer from
a given set of options. It also uses a “circular evaluation” strategy to ensure that the models are robust
to the ordering of answer options. Although logical and relational reasoning are part of the dataset,
this benchmark does not test particularly for different types of mathematical reasoning capabilities.
MMMU [43] is another popular benchmark that contains about 12.5K multimodal questions covering
six different disciplines of study, but only at the college level and tests expert-level knowledge of
LVLMs. In contrast, we are interested in understanding abilities of LVLMs that children demonstrate.
Some benchmarks have been designed to test specific capabilities of LVLMs like ScienceQA [27] for
scientific understanding and reasoning, VisIT-Bench [6] for instruction following, Bongard Problems
[8, 18, 19], Raven’s Progressive Matrices [5] and Abstraction and Reasoning Corpus [10] for abstract
visual reasoning, OCRBench [25] and TextVQA [37] for text recognition, etc.
**Benchmarks for mathematical reasoning: MATHVISTA [29] is a recent benchmark for mathe-**
matical reasoning based on puzzles that involve images, and also measures performance of LVLMs
for different types of mathematical reasoning (logical, arithmetic, geometric, etc.) and different
types of context images (natural images, line plots, scientific figures, etc.). GSM-8k [11] is a similar
dataset containing about 8.5K math word problems, but only has text inputs and outputs, no images
are involved. The main difference of the proposed SMART-840 benchmark against these prior
works is that the these datasets neither separate puzzles based on hardness (e.g., ease of solving
-----
by children at different school levels), which our proposed benchmark explicitly addresses, nor are
they supported by human performances at scale. TabMWP [28] is a benchmark with about 38k
grade-school problems but is limited to just tabular math word problems. SMART-101 [9] is the most
closely related benchmark to ours, that provides programmatically generated 2000 variations for each
of 101 puzzles from just 1st and 2nd grade Math Kangaroo puzzles. These variations can be used to
train larger models than using just a small number of puzzles. We note that we are not the first to
use MK performance statistics for research. In contrast to this dataset, SMART-840 contains 840
puzzles from all grades 1-12 and is designed to benchmark the zero-shot mathematical reasoning
abilities of AI models on a wide range of problem solving skills, along with important information
on performance statistics from test-takers. Further, we note that in [2, 4, 31, 32] MK exams were
used to study the development of mathematical competencies in children, however to the best of our
knowledge, these tests have not been used to benchmark AI models.
**LLMs with external tools: Instead of the LLMs or VLMs providing the answer directly, some works**
propose to use external tools in conjunction with LVLMs for solving visual reasoning problems.
These include works like VisProg [15], Visual ChatGPT [42] and ViperGPT [38] use a base LVLM to
generate a short program that gives a sequence of external image processing and computer vision tools
that need to be used in order to generate the output. Chameleon [30] further extends such methods to
also employ web search and shows improved results on ScienceQA and TabMWP benchmarks. These
works rely on in-content learning with several examples that need to show how to use available tools
and assume access to such tools, which may not be practical. Therefore, we focus on evaluating the
_zero-shot reasoning skills of LVLMs directly. We also note that there have been supervised approaches_
to develop reasoning models for specialized type of Olympiad problems, e.g., Alpha-Geometry [39]
looks at geometry Olympiad problems.
**3** **Benchmarking Approach and Experiments**
We first elucidate our data collection process, followed by the details of the LVLMs that we select
to benchmark in this study. The subsequent subsections evaluate the performances of LVLMs on
various aspects of the Olympiads deriving correlations with the performances of children.
**3.1** **Mathematical Kangaroo Olympiad**
As alluded to above, most of the mathematical Olympiads (such as International Math Olympiad,
AMC-8, AHSME, etc.) are targeted at middle or high-school students, while Math Kangaroo is one
Olympiad that conducts competitions for K-12 grades, making it a compelling source for this study.
Started in France in 1991, the competition has been organized in the USA every year since 1998 and
currently takes place in over 100 countries. Typically, there is a single exam for grades {n, n + 1},
for n ∈{1, 3, 5, 7, 9, 11}, thus there are a total of six exams in a year and children of both grades n
and n + 1 participate in the same exam.
Each exam consists of either 24 questions (for grades 1–4) or 30 questions for all higher grades,
and is in a multiple choice format with 5 candidate answers, of which only one option is the correct
answer. The questions can be purely text-based or can contain both text and an image, interpretation
of both jointly is then usually important for solving the problem. Each question is attributed weights
in {3, 4, 5}, where the lower points are given to problems that are typically deemed “easier” for
that grade (e.g., single step reasoning problems for grade 1) while higher points are attributed to
problems that need multi-level reasoning, enumeration of solution possibilities, etc. that typically
involve deeper (but age appropriate) problem solving skills. The participant is given 75 minutes to
complete an exam and the performance is computed as the weighted sum of correct responses.
**3.2** **Data Collection**
For this study, we envisage a data collection methodology that is fair, balanced, and offers an unbiased
benchmarking of AI models against children’s performance on MK tests. To this end, we decided
to use all the questions from MK exams without any omissions so that there is no selection bias in
our evaluations. As the statistical data we desire on the performance of children is unavailable for
MK competitions prior to year 2020, in this study we consider only questions from competitions
conducted in 2020-2024, which amounts to 840 problems in our dataset, dubbed SMART-840, and
-----
(a) (b) (c)
(d) (e) (f)
Figure 2: Figure 2(a) plots the distributions of children participating in MK Olympiads per year
over 2020–2024 for grades 1–12. Figure 2(b) plots the total number of participants per grade during
2020–2024. Figure 2(c) plots the total number of participants each year over all grades (1-12).
Figure 2(d) shows the number of puzzles and its portion for each category. Figure 2(e) shows the
statistics of image-text and text-only puzzles. Figure 2(f) shows the statistics of puzzle difficulty
(defined by their attributed weights).
consisting of 240 questions (all together) for grades 1–4 and 600 questions from grades 5–12, evenly
split between pairs of grades as described above. Figures 2(a), 2(b), and 2(c) show the distribution
of the number of children who participated across years from all grades, which adds to nearly 30K
students per year. The participant number is highest for grades 1–8 and then drops to less than 1000
for grades 9–12 (Figure 2(b)) perhaps because higher-grade children have other Olympiad options,
_e.g., AMC, IMO, etc. Nevertheless, we see that the number of participants put together from the last_
five years still produce a substantially large sample set for our analysis.
For creating the SMART-840 dataset, we downloaded publicly available[1] question papers (which
are image embedded PDF documents), followed by running optical character recognition software
for extracting the text of the puzzles, and manually cropping the associated image parts. Each such
extracted puzzle in the dataset was manually inspected for errors in its text and puzzle images. MK
also provides a segregation of each puzzle into one of the four categories, namely (i) geometry, (ii)
logic, (iii) algebra, and (iv) numbers. In Figure 2(d), we present the overall statistics of problem
distribution in the SMART-840 dataset. We see that geometric puzzles capture nearly 31% of all
the puzzles in our set, while the split is about equal between logic (26%) and numbers (27%), and
_algebra based problems are about 15.5%. In Figure 2(e), we plot the distribution of the number of_
problems that need both text and image reasoning (∼69%) against those that only have text questions.
Figures 2(d) and 2(e) also show the split across grades. We see that for higher grades (>8), the number
of text-only problems are higher: about 52% in grades 11-12 against about <20% in grades 1-4.
**3.3** **Selected Large Vision-and-Language Models**
We compare the performance of seven popular and SOTA LVLMs on the SMART-840 dataset.
Specifically, we consider i) GPT-4o [34], ii) Gemini-Pro [12], and iii) Claude-3 Opus [3], that are
popular for their abilities in solving challenging math and visual reasoning problems. Thus, we
believe it is a useful exercise to understand how they perform on children’s grade problems. Alongside
these SOTA LVLMs, we also consider other AI models that are popular, such as GPT-4v which is the
1Note that each test paper involves a small cost for download.
-----
first vision-and-language version of the GPT series, ii) Gemini-Flash that is well-known for its faster
response time, and XGen-MM [36], which is a recent open source LVLM in the BLIP series [23].
**3.4** **Grade-wise Performance Comparisons**
In this experiment, we compare the performance of the LVLMs listed above against the performance
of children on our SMART-840 dataset. For the human performance, we report the percentage
of average correct response rate, which we denote as accuracy going forward, and is computed
by: i) finding the ratio of the total number of correct children’s responses on a problem to the
total number of attempts, and (ii) averaging this ratio across all problems in the grade set. For
the LVLMs, we use the API interface to query the model using a suitable hand-crafted prompt.
Specifically, we found the following prompt to work well for all remote LVLMs: "Solve this
```
question with explanation of the intermediate steps. Your response should
end with one of the selected answer options from A1, B2, C3, D4, or E5."
```
which is accompanied by the text for the problem question and the image data.[2] For AI models,
we report their accuracy as the (percentage) of problems correctly answered to the total number of
problems in the set.
In Table 1, we present results comparing the performances of LVLMs against children on the entire
SMART-840 dataset. First, we report a random baseline to benchmark all our results, which is
computed by randomly sampling a response from a probability distribution over all the human
responses across the answer options for a problem. As is clear, all the answer options in the problems
are equally likely, and thus the random performance is close to one-fifth. Next, for each LVLM, we
queried (at least 2 times) each of the problems in SMART-840 using the prompt described above.
Note that for LVLM evaluation we also consider two additional possibilities, namely: (i) if a response
is not in the expected format as demanded in the prompt, and if we are unable to automatically extract
a valid response, we consider the response to be invalid in general (except in experiments when we
manually validate the responses), and (ii) in many cases, an LVLM decides not to solve a problem
(e.g., it mistakes the provided puzzle image to contain security issues), in which case as well, we
declare that problem as unsolved by the respective model. We manually inspected all the output
responses of GPT-4o (reported as GPT-4o (M) in the table) to ensure the our prompt is suitable, and
the model produces responses that are reasonable, grounded in the problem specification (and are
not due to issues such as network failures, response parsing failures, etc.) and its solution attempt is
reasonable (but not necessarily correct). All problems where the solution was unreasonable (even if
the selected option is correct), we manually marked them as a failed response.
We see from Table 1 that GPT-4o performs the best on average across all the grades with an average
accuracy of 42.5%, followed by Claude-3-Opus at 38% and Gemini-Pro at nearly 32%. There are
several intriguing aspects in the performance of LVLMs that we can witness in Table 1.
i) Performance gap: In Table 1, we see that the performance of AI models are below that of
children across the grades and interestingly this gap is consistent in all the models we experimented.
Specifically, the best accuracy of LVLMs are in the range of 40-50% while the children’s average
performance is consistently near 60% or above. Note that we report children’s performances for
each grade separately, where kids of a pair of grades take the same exam. Unsurprisingly, we find
that children of the higher grade (in the pair) perform significantly better than those of lower grades
(although this gap reduces as problem solving abilities mature towards higher grades) suggesting a
cumulative set of core problem solving skills that children build over their growth period.
ii) Performance trend: In Table 1, we see yet another consistent trend of LVLMs, i.e., being better
at solving problems of higher grades (8-12) than at lower grades, which is surprising given the
complexity of solutions increases with grades. This trend was also seen in [9] where the authors
compared the performance of LLMs on second grader problems. We see that while GPT-4o shows
this increasing trend with an accuracy of 40% at grades 1-2 towards nearly 50% for grades 11-12, the
trend is more striking for other LVLMs such as Gemini-Pro that varies from about 25% (for grade 1-2)
to 40% for grades 11-12. We find that Claude-3 Opus produces a reasonably consistent performance
of around 40% albeit having a different trend: dip in the performance for middle grades than lower or
higher grades. We find from Table 1 that open source LVLMs such as XGEN-MM perform poorly in
comparison, specifically, it either selected incorrect answer options or many-a-times did not follow
2The format of A1, B2, etc. allow us to uniquely parse the LVLM response to automatically validate it.
-----
Grade
1 2 3 4 5 6 7 8 9 10 11 12 Mean
Model
Human 58.8 67.6 62.3 70.1 59.1 65.4 59.7 64.3 64.2 69.3 64.9 65.6 64.2
Random 20.1 20.2 20.1 20.2 20.3 20.1 20.1
GPT-4o 41.6 (7.1) 38.6 (1.7) 35.1 (0.8) 47.1 (0.8) 41.3 (2.0) 50 (4.0) 42.4
GPT-4o (M) 42.5 36.7 36.0 46.7 43.3 50.0 42.5
GPT-4v 39.2 (0.6) 38.3 (0.6) 29.3 (3.3) 35.3 (1.9) 38.7 (1.9) 43.3 (3.7) 37.4
Gemini-Pro 25.8 (3.5) 27.5 (0.6) 25.3 (3.3) 30.7 (1.8) 39.3 (3.7) 41.3 (2.8) 31.7
Gemini-Flash 19.2 (0.6) 29.2 (10.4) 22.0 (8.4) 30.7 (9.7) 38.7 (13.7) 36.7 (4.3) 29.4
Claude-3 Opus 38.3 (5.3) 33.3 (5.8) 31.3 (6.6) 40.7 (10.4) 42.0 (5.6) 44.0 (2.8) 38.3
XGen-MM 7.5 9.1 5.3 10.0 8.0 8.0 8.0
Table 1: Accuracy (%) of correct responses of children in the respective grades against the accuracy
of LVLMs when the agent is asked to provide explanation of their responses. GPT-4o (M) is the
responses of GPT-4o manually validated based on their reasoning. We color the cells in the same way
as how the tests are designed (in every 2 grade levels). We report the standard deviation in brackets.
the instruction, thereby producing invalid outputs. See Table 5-10 for examples. Further, we find that
the performance of GPT-4v is inferior to that of GPT-4o, which is expected given the latter being
a more advanced version of GPT-4v. Further, the accuracy of faster LVLMs such as Gemini-Flash
is below that of its advanced counterpart. Thus, in our subsequent study, we only consider the best
performing LVLMs, namely GPT-4o, Gemini-Pro, and Claude-3 Opus.
iii) Variance: We also find that there is a substantial variance in the performance of SOTA LVLMs.
For example, as shown in Table 1, the standard deviation for GPT-4o is nearly 7% in solving 1-2 grade
problems, while there is a reducing trend in the magnitude of this deviation for higher grades, the
reliability in the responses are still questionable. The standard deviation is worse for Claude-3-Opus,
where it is nearly 5% across grades, even reaching 10% for grades 7-8. Interestingly, for grades
11-12, the deviation appears more stable at nearly 3-4%, while the performance is also the best. We
manually inspected the responses of GPT-4o, by re-running the model on specific problems that
seemed to have issues unrelated to the solution based reasoning (e.g., network issues, issues in image
understanding, etc.). In GPT-4o (M), we report the result of responses after this validation, which
appears to be sufficiently close to the original responses, suggesting the lower performances of GPT
are not due to extraneous issues, instead are due to the lack of its knowledge in problem solving on
the respective test questions.
**4** **Analysis of Results**
In this section, we take a deeper look at our results in Table 1 to gain insights into how the AI model
responses correlate with those of children. Even though a model is expected to perform as well as an
adult, given that there performances are below that of children, it is imperative to ask if they at least
behave like high-performing children in their responses? Specifically: we seek to answer the question:
_Are problems that are hard for children also hard for AI? To answer this, we conduct different types_
of correlation analysis, presented below.
**Difficulty Index of a problem [13, 35], is the ratio of the number of correct responses for a test**
problem to the total number of solution attempts. This index provides a score between 0 and 1 for
each problem, where 0 implies none of the children were able to solve it (hard problem). In Table 2
(Diff-I), we report the Pearson correlations coefficient between the difficulty index and the responses
by LVLMs. We see that there is in general only weak correlation between model and human accuracy,
and this correlation mostly occurs at the higher grade levels. This suggests that all LVLMs in general
find a different set of problems to be difficult than children do.
**Discriminative Index [13, 20, 26] measures the use of knowledge by a test taker. To compute this**
score, we split the student population into two groups, the good learners that correspond to the
top-20% of participants who score the highest, and bad learners that constitute the bottom-20%.
Next, we compute the difficulty index for each of these sets separately, and define discriminative
index as the difference between the two difficulty indices. Thus, the value of discriminative index
is in [−1, 1], where 1 corresponds to a test problem where all the good learners produced correct
responses while all the bad learners made a mistake – i.e., a problem that can separate the good
-----
Model \ Grade 1 2 3 4 5 6 7 8 9 10 11 12
GPT-4o 0.14 0.16 0.15 0.17 -0.09 -0.05 0.12 0.13 0.22 0.22 0.20 0.26
Diff-I Gemini-P 0.23 0.27 -0.05 -0.06 0.01 -0.01 0.05 0.06 0.21 0.19 0.20 0.16
Claude-3 0.11 0.13 0.09 0.11 0.08 0.06 0.14 0.15 0.16 0.16 0.25 0.18
GPT-4o -0.07 -0.15 0.07 -0.01 0.07 -0.01 -0.09 -0.08 -0.14 -0.18 -0.11 -0.13
Gemini-P -0.05 -0.25 -0.04 -0.05 -0.01 -0.01 0.01 0.03 -0.18 -0.18 -0.15 -0.13
Disc-I
Claude-3 -0.02 -0.14 0.17 0.06 -0.04 -0.09 -0.07 -0.09 -0.16 -0.11 -0.09 -0.16
GPT-4o -0.08 -0.12 -0.14 -0.10 0.03 -0.03 0.08 0.03 -0.09 -0.07 -0.17 -0.09
Gemini-P -0.06 -0.17 -0.06 -0.06 -0.03 -0.06 0.03 0.03 -0.20 -0.12 -0.27 -0.19
Time-C. Claude-3 0.14 0.10 -0.07 -0.07 -0.04 -0.01 -0.01 -0.07 -0.09 -0.07 -0.16 -0.13
GPT-4o -0.04 -0.04 -0.02 -0.02 -0.00 -0.00 0.08 0.08 0.13 0.13 0.15 0.15
Gemini-P 0.05 0.05 -0.07 -0.07 0.00 0.00 0.02 0.02 0.27 0.27 0.30 0.30
Weight-C. Claude-3 -0.10 -0.10 -0.02 -0.02 0.00 0.00 0.15 0.15 0.18 0.18 0.30 0.30
GPT-4o -0.18 -0.18 -0.15 -0.15 0.10 0.10 -0.14 -0.14 -0.23 -0.23 -0.24 -0.24
Gemini-P -0.26 -0.26 0.03 0.03 -0.01 -0.01 -0.08 -0.08 -0.23 -0.23 -0.19 -0.19
Entropy-C. Claude-3 -0.12 -0.12 -0.06 -0.06 -0.02 -0.02 -0.15 -0.15 -0.18 -0.18 -0.24 -0.24
Table 2: Pearson correlation coefficient (ρ) between the difficulty index (top) and discriminative index
(bottom) of the SMART-840 problems against the responses generated by LVLMs. The green/red
cell color indicates positive/negative ρ, where darker cells represent larger absolute values.
learners from the bad. To understand if an AI model is a good learner or bad learner, we propose to
compute the Pearson correlation between the discriminative index of children’s performances against
that of the models. The result of this analysis is provided in Table 2 (Disc.-I). Surprisingly, we find a
negative trend across all grades, suggesting that an AI model finds it easier to solve problems that are
less discriminative, and whose answer options are plausibly discernible without substantial reasoning.
**Time-taken Correlation analyzes the dependency between how much time children (on average)**
took in answering the problems – and thus potentially capturing the problem hardness – to whether AI
models also find those problems challenging. To this end, we aggregated the duration children spent
on each problem[3], followed by separating the duration into two sets on their median. We marked all
problems above the median as hard and the rest as easy. Next, we computed the Pearson correlation
between the responses of AI models against this hardness. Our results in Table 2 (Time-C.) shows
again a weak negative correlation trend, suggesting that the model finds it easier to solve problems
that take longer for children – a surprising result.
**Weight Correlation measures the interaction between the hardness of a puzzle as attributed by**
MK (through its weight) against the response. Notably, we convert this weight in {3, 4, 5} to
the corresponding difficulty score of {1.0, 0.66, 0.33} for each problem, and compute the Pearson
correlation to the AI responses, which are 1 if the answer option selected to the problem is correct and
0 otherwise. We find a slightly stronger positive correlation on this experiment in Table 2 (Weight-C.),
suggesting the AI is able to solve problems that (the adult creator) thought are of the easier kind.
**Entropy Correlation measures the correlation between the entropy of the distribution of answer**
options for a problem against AI responses. As entropy is higher for problems which are hard or their
options confusing for children, a positive correlation would suggest AI is similarly confounded. However, the trend in Table 2 shows the reverse, with slightly stronger negative correlations, suggesting
that AI models are apparently not much confused on problems children find indecisive.
**Category-Level Performances: As alluded to above, SMART-840 dataset consists of problems**
in four different categories, each involving entirely different skill sets and knowledge background
for their solutions. For the performances reported in Table 1, in Figure 3, we present the results of
humans and LVLMs on the four problem categories, namely (i) geometry, ii) numbers, iii) algebra,
and iv) logic. While children perform consistently well on all these categories, we find that AI models
falter significantly in geometry and logic, with their best performances at about half of humans
while they perform reasonably well on numbers and algebra. We further analyze the performance
of LVLMs on problems involving both image and text (e.g., geometry problems) and text-only
puzzles. The results show that it is indeed the image-text problems that the models struggle with and
we see a strong similarity between performances on geometry and logic problems with image-text
problems. Interestingly, we also find that on text-only puzzles (which are about 30%) in our dataset,
3Using data from remote test takers
-----
GPT-4o-Expl.[4] shows better performances than the average human performance, while other LVLMs
(with suffix “-expl.”) are also performing reasonably well.
Figure 3: Comparison of the average accuracy (%) of humans and LVLMs on each category of the
Olympiad problems with the corresponding radar plot.
**Importance of Reasoning with Explanation:** In this experiment, we changed the LVLM
prompt to: `"Solve this question.` `You should provide a response without any`
```
explanation. Your response should end with one of the selected answer
```
`options from A1, B2, C3, D4, or E5.".` In Figure 3, we show this result for all the
LVLMs (suffixed “-no-expl.”) on the six categories. We see a trend of a dip in performance among
all the models, specifically GPT-4o drops from 49.5% to 17.6% on the highly-performing ’number’
category, and from 63.4% to 31.5% on algebra. The drop is also substantial on text-only problems.
The trend is similar on other LVLMs (e.g., Claude-3), however slightly lower in Gemini-Pro.
**5** **Discussion and Conclusions**
This paper tackles the important problem of understanding the reasoning abilities of LVLMs. Our
analysis using the proposed SMART-840 dataset reveals several intriguing results: i) there is a lack of
any significant correlation between the perceived complexity in solving puzzles by children and by AI
models; instead there are surprising negative correlations, ii) there is a significant trend among LVLMs
in performing low on younger grade problems and progressively get better at higher grades, which
is counter intuitive. While, one may attribute this observation to the availability of better training
data or increased number of text puzzles, it is still unsettling that AI models struggle to perform
even on puzzles involving simple geometry and logic, that accentuates the lack of understanding
between language and multimodal content. Further, while there is a substantial gap between the
best of LVLMs and the worst, or random baselines, 40% for GPT4o vs 20% for random or 25% for
Gemini-Pro, this is only a 20% difference; in contrast the gap between even the best LVLMs and
human adult level performance in reasoning is much greater. We ought to point this out.
Our results suggest some ways that LVLMs, even the most advanced ones, may not really be
reasoning in the ways that humans do. For humans, reasoning is an ability to think that goes beyond
just similarity to training examples. But here we are seeing signs that similarity to the large mass
of training examples appears to be what is driving performance across all levels of these problems.
Of course, we do not fully know what is in the training corpus SOTA LVLMs used. But it may
include many Olympiad problems than there are math kangaroo grade 1-2 style problems. Yet for
people, and not for frontier LVLMs, the MK grade 1-2 problems are far easier than the Olympiad
style grade 11-12 problems. This suggests both that human reasoning is based on a different set of
core competencies, which the early grade problems test, and which a pure machine learning approach
to training reasoning is not really picking up on.
Model \ Grade 1 2 3 4 5 6 7 8 9 10 11 12
GPT-4o 49/57.4 49/30.4 61/26.8 60/13.2 70/44.3 70/29.0 57/46.4 58/39.7 70/23.8 70/16.8 66/21.1 50/29.2
Gemini-P 78/7.3 78/2.3 69/14.9 68/6.6 75/35.6 75/21.9 80/14.4 81/10.0 79/11.8 77/7.7 51/43.7 34/56.9
Claude-3 69/20.6 69/6.7 81/2.5 80/1.1 86/18.3 86/9.3 65/34.3 66/28.9 85/7.1 82/4.0 53/39.8 36/54.6
Table 3: National Rank (↓) / percentile (↑) ranking of LVLMs against children’s performance on MK
2024 Olympiad based on the test scores computed from the model response.
4Which uses a prompt to explain its reasoning.
-----
Before concluding, we present in Table 3, the national rank and percentile of the three SOTA models
on the scores they received for 2024 MK Olympiad (when compared against children). We see that
AI models are substantially below children in ranking, with GPT-4o best on grade 1 at rank 49 and
Gemini-Pro at 34 for grade 12. These scores are based on the percentiles received from MK. The
table shows that there is a large gap to fill for LVLMs against children’s problem solving skills.
**Acknowledgements: Authors thank Preethi Ann Cyril for helping with data curating and Math**
Kangaroo USA for providing the performance data.
**References**
[1] Math Kangaroo USA, NFP Inc. https://mathkangaroo.org/mks/, 2012–2024. 2
[2] Lukas Andritsch, Evita Hauke, and Jakob Kelz. How to create and solve: Analysis of items from
the mathematical kangaroo from two perspectives. Engaging young students in mathematics through
_competitions—World perspectives and practices, 2:117–136, 2020. 4_
[[3] Anthropic. Claude-3 Opus. https://www.anthropic.com/claude-opus, 2023. Accessed: 2024-05-](https://www.anthropic.com/claude-opus)
29. 2, 5
[4] Mark Applebaum and Roza Leikin. Girls’ performance in the kangaroo contest. Including the Highly
_Gifted and Creative Students–Current Ideas and Future Directions, pp. 87, 2019. 4_
[5] Yaniv Benny, Niv Pekar, and Lior Wolf. Scale-localized abstract reasoning. In Proceedings of the
_IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 12557–12565, 2021. 3_
[6] Yonatan Bitton, Hritik Bansal, Jack Hessel, Rulin Shao, Wanrong Zhu, Anas Awadalla, Josh Gardner,
Rohan Taori, and Ludwig Schimdt. Visit-bench: A benchmark for vision-language instruction following
inspired by real-world use. arXiv preprint arXiv:2308.06595, 2023. 3
[7] Elizabeth L Bjork, Robert A Bjork, et al. Making things hard on yourself, but in a good way: Creating
desirable difficulties to enhance learning. Psychology and the real world: Essays illustrating fundamental
_contributions to society, 2(59-68), 2011. 2_
[8] MM Bongard. The recognition problem. tech. rep. 1968. 3
[9] Anoop Cherian, Kuan-Chuan Peng, Suhas Lohit, Kevin A. Smith, and Joshua B. Tenenbaum. Are deep
neural networks smarter than second graders? In Proceedings of the IEEE conference on computer vision
_and pattern recognition, 2023. 2, 4, 6_
[10] François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547, 2019. 1, 3
[11] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias
Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word
problems. arXiv preprint arXiv:2110.14168, 2021. 3
[[12] Google DeepMind. Gemini pro. https://www.deepmind.com/gemini-pro, 2023. Accessed: 2024-](https://www.deepmind.com/gemini-pro)
05-29. 2, 5
[13] Robert L. Ebel and David A. Frisbie. Essentials of Educational Measurement. Prentice Hall, 1991. 7
[14] K Anders Ericsson, Ralf T Krampe, and Clemens Tesch-Römer. The role of deliberate practice in the
acquisition of expert performance. Psychological review, 100(3):363, 1993. 2
[15] Tanmay Gupta and Aniruddha Kembhavi. Visual programming: Compositional visual reasoning without
training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.
14953–14962, 2023. 4
[16] John Hattie and Helen Timperley. The power of feedback. Review of educational research, 77(1):81–112,
2007. 2
[17] José Hernández-Orallo. The measure of all minds: evaluating natural and artificial intelligence. Cambridge
University Press, 2017. 1
[18] Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, and Anima Anandkumar. Bongard-HOI:
Benchmarking few-shot visual reasoning for human-object interactions. In Proceedings of the IEEE/CVF
_Conference on Computer Vision and Pattern Recognition, 2022. 3_
-----
[19] Huaizu Jiang, Xiaojian Ma, Weili Nie, Zhiding Yu, Yuke Zhu, and Anima Anandkumar. Bongard-hoi:
Benchmarking few-shot visual reasoning for human-object interactions. In Proceedings of the IEEE/CVF
_conference on computer vision and pattern recognition, pp. 19056–19065, 2022. 3_
[20] Truman Lee Kelley. The selection of upper and lower groups for the validation of test items. Journal of
_Educational Psychology, 30(1):17, 1939. 7_
[21] Brenden M Lake, Tomer D Ullman, Joshua B Tenenbaum, and Samuel J Gershman. Building machines
that learn and think like people. Behavioral and brain sciences, 40:e253, 2017. 1
[22] Shane Legg, Marcus Hutter, et al. A collection of definitions of intelligence. Frontiers in Artificial
_Intelligence and applications, 157:17, 2007. 1_
[23] Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training
with frozen image encoders and large language models. In International conference on machine learning,
pp. 19730–19742. PMLR, 2023. 6
[24] Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi
Wang, Conghui He, Ziwei Liu, et al. Mmbench: Is your multi-modal model an all-around player? arXiv
_preprint arXiv:2307.06281, 2023. 3_
[25] Yuliang Liu, Zhang Li, Hongliang Li, Wenwen Yu, Mingxin Huang, Dezhi Peng, Mingyu Liu, Mingrui
Chen, Chunyuan Li, Lianwen Jin, et al. On the hidden mystery of ocr in large multimodal models. arXiv
_preprint arXiv:2305.07895, 2023. 3_
[26] Frederic M. Lord and Melvin R. Novick. Statistical Theories of Mental Test Scores. Addison-Wesley
Publishing Company, 1968. 7
[27] Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter
Clark, and Ashwin Kalyan. Learn to explain: Multimodal reasoning via thought chains for science question
answering. Advances in Neural Information Processing Systems, 35:2507–2521, 2022. 3
[28] Pan Lu, Liang Qiu, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, Tanmay Rajpurohit, Peter Clark, and
Ashwin Kalyan. Dynamic prompt learning via policy gradient for semi-structured mathematical reasoning.
_arXiv preprint arXiv:2209.14610, 2022. 4_
[29] Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei
Chang, Michel Galley, and Jianfeng Gao. MathVista: Evaluating mathematical reasoning of foundation
models in visual contexts. In Proceedings of the International Conference on Learning Representations,
2024. 2, 3
[30] Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and
Jianfeng Gao. Chameleon: Plug-and-play compositional reasoning with large language models. Advances
_in Neural Information Processing Systems, 36, 2024. 4_
[31] Elisabet Mellroth. High achiever! Always a high achiever?: A comparison of student achievements on
_mathematical tests with different aims and goals. PhD thesis, Karlstads universitet, 2014. 4_
[32] Elisabet Mellroth. Problem solving competency and the mathematical kangaroo. In CERME 9, pp.
1095–1096, 2015. 4
[33] Marvin Minsky. Society of mind. Simon and Schuster, 1988. 1
[[34] OpenAI. GPT-4. https://www.openai.com/research/gpt-4, 2023. Accessed: 2024-05-29. 2, 5](https://www.openai.com/research/gpt-4)
[35] W. James Popham. Classroom Assessment: What Teachers Need to Know. Pearson Education, 2010. 7
[36] Salesforce AI Research. xgen-mm-phi3-mini-instruct model card, May 2024. [URL https://](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1)
```
huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-r-v1. 2, 6
```
[37] Amanpreet Singh, Vivek Natarjan, Meet Shah, Yu Jiang, Xinlei Chen, Devi Parikh, and Marcus Rohrbach.
Towards vqa models that can read. In Proceedings of the IEEE Conference on Computer Vision and Pattern
_Recognition, pp. 8317–8326, 2019. 3_
[38] Dídac Surís, Sachit Menon, and Carl Vondrick. Vipergpt: Visual inference via python execution for
reasoning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11888–
11898, 2023. 4
[39] Trieu H Trinh, Yuhuai Wu, Quoc V Le, He He, and Thang Luong. Solving olympiad geometry without
human demonstrations. Nature, 625(7995):476–482, 2024. 4
-----
[40] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems,
2017. 1
[41] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama,
Maarten Bosma, Denny Zhou, Donald Metzler, et al. Emergent abilities of large language models. arXiv
_preprint arXiv:2206.07682, 2022. 1_
[42] Chenfei Wu, Shengming Yin, Weizhen Qi, Xiaodong Wang, Zecheng Tang, and Nan Duan. Visual chatgpt:
Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023. 4
[43] Xiang Yue, Yuansheng Ni, Kai Zhang, Tianyu Zheng, Ruoqi Liu, Ge Zhang, Samuel Stevens, Dongfu
Jiang, Weiming Ren, Yuxuan Sun, et al. Mmmu: A massive multi-discipline multimodal understanding
and reasoning benchmark for expert agi. arXiv preprint arXiv:2311.16502, 2023. 2, 3
**A** **Average Performance Scores**
As alluded to in the main paper, each examination consists of problems with weights, where the
simpler problems have a weight of 3, medium hard ones are 4 pointers and the difficult ones carry 5
points. In Table 4, we compute the average score of each LVLM over the five years. Interestingly, we
see that the scores for GPT-4o is higher than other models, suggesting it can solve higher-weighted
problems more often than other methods. However, the overall score is still below the maximum
score of the human.
Table 4: Normalized performance scores (%) received by various LVLMs averaged over the five years.
|Model Grade|1-2|3-4|5-6|7-8|9-10|11-12|
|---|---|---|---|---|---|---|
|GPT-4o GPT-4v Gemini-P Gemini-F Claude-3 XGen-MM|42.9 40.2 25.4 20.2 39.3 7.5|36.9 36.9 28.1 28.9 33.5 9.2|36.0 28.7 25.3 21.3 31.3 6.2|45.8 33.2 30.5 27.8 39.1 7.3|42.0 36.5 36.6 36.8 40.1 10.2|48.5 40.8 38.3 34.3 41.0 7.3|
The scores are obtained by multiplying each correct solution by its respective weight and dividing
by the maximum score for the respective competition (96 for 1–4 and 120 for 5–12), followed by
averaging over 5 years. Higher values indicate the model solved higher weighted problems more
frequently.
**B** **Details of Ranking and Percentiles (Table 3)**
In Table 3 of the main paper, we provided the test rank and the overall percentile scores for LVLMs.
Here we provide more details on how these rankings were computed. As noted above, each problem
in the test has a score associated with it and the final performance of a participant is computed by the
sum of the weights of all correctly answered questions. This amounts to a maximum of 96 points for
grades 1–4 and 120 points for 5–6. First, with data provided by MK, we found the USA ranks of
the participants for each respective score, and the corresponding percentiles among all the students
participated in that test that year. Note that MK had about 40,000 students overall for year 2024.
Next, we computed the score of each LVLM on the test, and we found the rank / percentile of LVLM
among the list of scores and their ranks against children. If a score of LVLM is not found in the list
of scores that children obtained, we took the closest maximum score. As can be seen from Table 3
(main paper), the LVLMs rank quite poor against children in the ranking.
**C** **Example Responses**
The full set of responses can be seen in the zip file provided. As alluded to in the main paper, there
are three types of outputs that an LVLM response can make towards our evaluation: i) the LVLM
answers with one of the five answer options and we can parse it correctly, ii) the LVLM produces an
answer which may or may not be correct, but we cannot parse the option automatically, and iii) the
LVLM decides not to provide an answer. For the case (ii) we use the letter ’K’ as the output of the
LVLM and that response is deemed incorrect. For the case of (iii), we use ’-1’ as the response and
-----
also consider that response as incorrect in our evaluation. There is also a fourth possibility, when
the LVLM predicts more than one answer as correct. We consider this output as ’K’ as well. In
Tables 5- 10 below, we provide several examples of LVLM responses to questions in SMART-840
dataset.
-----
**Question: The kangaroo goes up 3 steps each time the**
rabbit goes down 2 steps. On which step do they meet?
Options: A1: 3 B2: 4 C3: 5 D4: 6 E5: 7
**GPT-4o: Let’s analyze the movement of both the kangaroo and the rabbit step by step. 1. Initial**
positions: - Kangaroo is on step 1. - Rabbit is on step 10. 2. Movements per turn: - Kangaroo goes up
3 steps each time. - Rabbit goes down 2 steps each time. We need to determine the turn at which both
the kangaroo and the rabbit land on the same step. Let’s calculate their positions after each turn: |
Turn | Kangaroo’s Step | Rabbit’s Step | |——|—————–|—————| | 1 | 1 + 3 = 4 | 10 - 2 = 8 | | 2
| 4 + 3 = 7 | 8 - 2 = 6 | | 3 | 7 + 3 = 10 | 6 - 2 = 4 | | 4 | 10 + 3 = 13 | 4 - 2 = 2 | From the calculations, we
see that on the second turn, the kangaroo is on step 7 while the rabbit is on step 6. On the third turn,
the kangaroo is on step 10 while the rabbit is on step 4. Upon reviewing each turn: On the third turn:
The kangaroo will be on step 10. The rabbit will be on step 4. Upon reviewing again: On the third
turn: Let’s review in precise steps: - On the second turn, kangaroo is on step 7 and the rabbit is on
step 6. On the third turn, it incorrectly reviewed to 10 and 4 in previous calculation, upon reconsider
looking up: - Kangaroo going up 3-steps from 7, lands on 7+3=10 but rabbit going again 8 - goes
down on step 4. Realizing correction upto, Trying other will remain, looks properly at: A halftime
view seeing - it clearly at steps step are correctingly. Therefore, neatly reviewing: Kangaroo-steps
reviewed Till 5 reduces. They meet on a reshuffling step precisely at step: again adequate, Review
near Kangaroo-7 meets, So shutting more review at neatest finally seeing, likely - Therefore, the
correct answer is step 7, rightly upon reviewing clearly at. So E. Option: E5 .
**GPT-4v: To solve the problem, we need to consider the movement of both the kangaroo and the**
rabbit. The kangaroo moves up 3 steps each time, while the rabbit moves down 2 steps each time.
They need to reach the same step at some point, so let’s check step by step where they might meet.
Starting from step 1 (where the kangaroo is) and step 10 (where the rabbit is), we will calculate their
respective movements until they meet: Kangaroo’s steps: 1, 4, 7, 10... Rabbit’s steps: 10, 8, 6, 4...
The kangaroo and the rabbit will meet on a step number that appears in both sequences. Looking
above, we can see that the number 4 appears in both sequences. The kangaroo reaches step 4 after
moving up 3 steps once from step 1, and the rabbit reaches step 4 after moving down 2 steps three
times from step 10. Thus, they meet on step 4. The answer is B2: 4.
**Gemini-Pro v1.5: **Understand the pattern:** The kangaroo moves up 3 steps while the rabbit**
moves down 2 steps. This means their relative speed is 3 + 2 = 5 steps per unit of time. 2. **Find the
least common multiple:** We need to find the least common multiple (LCM) of 3 (kangaroo’s steps)
and 2 (rabbit’s steps), which is 6. This means that every 6 steps, they will align in their movement
pattern. 3. **Determine the meeting point:** - The kangaroo will have gone up 6 steps (2 sets of
3 steps). - The rabbit will have gone down 6 steps (3 sets of 2 steps). 4. **They meet on step 6.**
**Therefore, the answer is D4.**
**Gemini-Flash v1.5: Here’s how we can solve this problem: **Understanding the Problem:** * The**
kangaroo goes up 3 steps at a time. * The rabbit goes down 2 steps at a time. * We need to find the
step number where they meet. **Setting Up the Problem:** Let’s assume the kangaroo starts at step
0 and the rabbit starts at step ’x’. * **Kangaroo’s position:** After ’n’ jumps, the kangaroo will be at
step 3n. * **Rabbit’s position:** After ’n’ jumps, the rabbit will be at step x - 2n. **Finding the
Meeting Point:** The kangaroo and rabbit meet when their positions are the same. We can set up an
equation: 3n = x - 2n **Solving the Equation:** 1. **Combine ’n’ terms:** 5n = x 2. **We need
to find a value of ’x’ (the rabbit’s starting step) that is a multiple of 5.** **Looking at the Answer
Options:** The only answer option that is a multiple of 5 is **C3: 5**. **Explanation:** If the
rabbit starts at step 5, the kangaroo will meet the rabbit on the 5th step (3 jumps * 1 jump = 3 steps,
and the rabbit will be at step 5 - 2 jumps = 1 step). **Therefore, the answer is C3: 5**
**Claude-3-Opus: Let’s solve this step-by-step: 1) The kangaroo starts at step 1 and goes up 3 steps at**
a time. So its positions will be: Step 1 → Step 4 → Step 7 → Step 10 2) The rabbit starts at step 10
and goes down 2 steps at a time. So its positions will be: Step 10 → Step 8 → Step 6 → Step 4 →
Step 2 3) Comparing the positions, we see that the kangaroo and rabbit will both be on Step 4 at some
point. 4) Therefore, they meet on Step 4. B2: 4
**XGen-MM: The kangaroo and the rabbit are on opposite sides of the staircase, with the kangaroo**
on the left and the rabbit on the right. The kangaroo is on step 1 and the rabbit is on step 10. The
kangaroo goes up 3 steps each time the rabbit goes down 2 steps. To meet, the kangaroo needs to go
up 3 steps for every 2 steps the rabbit goes down. The kangaroo is currently on step 1, so it needs to
go up 3 steps to meet the rabbit. The rabbit is currently on step 10, so it needs to go down 2 steps to
meet the kangaroo. Therefore, the kangaroo and the rabbit will meet on step 4.
Table 5: Puzzle #1 for 1-2 graders 2020. Correct answer is D.
-----
**Question: Which key would it be impossible to cut into**
three different figures of five shaded squares? Options:
A1: A B2: B C3: C D4: D E5: E
**GPT-4o: To determine which key it would be impossible to cut into three different figures of five**
shaded squares, we need to carefully look at the shape and try to find groups of five connected squares.
Let’s analyze each one. **Key (A):** - It is possible to divide this shape into three groups each
containing five connected shaded squares: 1. Top left block (4 connected, 1 extra) 2. Vertical middle
section (1 extra + 4 connected) 3. Bottom right block (5 connected) Thus, Key (A) can be divided
accordingly. **Key (B):** - Let’s inspect possible groupings of five shaded squares: 1. Top left block
(5 connected) 2. Middle horizontal block (5 connected) 3. Right bottom (5 connected) Thus, Key (B)
can be divided accordingly. **Key (C):** - It’s more challenging to break it down into 3 groups of 5
shaded squares each due to its layout. 1. Top left block (4 connected) 2. Middle vertical block (5
connected) 3. Bottom right block (5 connected) Thus, Key (C) can be divided accordingly. **Key
(D):** - It’s more challenging to break it down into 3 groups of 5 shaded squares each due to its
layout: 1. Top section (5 connected) 2. Middle vertical block (5 connected) 3. Bottom section (5
connected) Thus, Key (D) can be divided accordingly. **Key (E):** - Let’s verify finding groups
of 5 shaded squares: 1. Top left block (4 connected) 2. Next L-shape (3 connected) 3. Vertical
section (5 connected) Key (E) has no viable configuration of three different five shaded square groups.
Considering the analysis, Option E fits the criteria of being impossible to cut into three different
figures of five shaded squares. Therefore, the answer is **E5**.
**Gemini-Pro v1.5: Let’s analyze each key and see if it’s possible to cut it into three figures of five**
shaded squares: * **Key A:** Can be easily divided horizontally into three sets of 5 shaded squares. *
**Key B:** Can be divided into three pentominoes by cutting along the diagonal and then separating
the top two pieces with a vertical cut. * **Key C:** Can be divided into three pentominoes as
shown: [Image of Key C divided into three pentominoes] * **Key D:** Can be divided into three
pentominoes by cutting it horizontally. * **Key E:** No matter how you try to divide it, you’ll
always end up with at least one piece that doesn’t have exactly five shaded squares. This is because
the key has 16 shaded squares, and 16 is not divisible by 5. **Therefore, the answer is E5.**
**Claude-3-Opus: Let’s analyze each key to see if it can be cut into three different figures of five**
shaded squares: (A) From the 7 squares, we can form an L-shape (3 squares), a T-shape (4 squares)
and a 2x2 square (4 squares), each using 5 shaded squares with some overlap. So (A) is possible.
(B) The 9 squares can be split into a 2x2 square (4 squares), a 3x2 rectangle (6 squares), and an
L-shape (3 squares), each using 5 shaded squares with some overlap. So (B) is possible. (C) The 5
squares cannot be split into three different figures of 5 shaded squares each, as there are not enough
squares. So (C) is impossible. (D) The 8 squares can form a 2x2 square (4 squares), a 3x2 rectangle
(6 squares), and a T-shape (4 squares), each using 5 shaded squares with some overlap. So (D) is
possible. (E) The 10 squares can be split into a 2x2 square (4 squares), a 3x2 rectangle (6 squares),
and a cross shape (5 squares), each using 5 shaded squares with some overlap. So (E) is possible.
Therefore, the only key that cannot be cut into three different figures of five shaded squares is (C). C3
**XGen-MM: The image shows five different figures, each composed of a series of rectangles and**
squares. To determine which figure is impossible to cut into three different figures of five shaded
squares, we need to analyze the structure of each figure. A) The figure is composed of a series of
rectangles and squares, and it is possible to cut it into three different figures of five shaded squares.
B) The figure is composed of a series of rectangles and squares, and it is possible to cut it into three
different figures of five shaded squares. C) The figure is composed of a series of rectangles and
squares, and it is possible to cut it into three different figures of five shaded squares. D) The figure is
composed of a series of rectangles and squares, and it is possible to cut it into three different figures
of five shaded squares. E) The figure is composed of a series of rectangles and squares, and it is
possible to cut it into three different figures of five shaded squares. Based on the analysis, all figures
can be cut into three different figures of five shaded squares. Therefore, the answer is not possible,
and there is no correct option to choose.
Table 6: Puzzle #23 for 3-4 graders 2020. Correct answer is B.
-----
**Question: The figure shows the plan of the seven train**
routes of a small town. The circles indicate the stations.
Martin wants to paint the lines in such a way that if two
lines share a common station, then they are painted with
different colors. What is the smallest number of colors
that he can use? Options: A1: 3 B2: 4 C3: 5 D4: 6 E5: 7
**GPT-4o: To solve this problem, we must determine the minimum number of colors required to color**
the train routes such that no two routes sharing a station have the same color. This is essentially
a graph coloring problem, where we need to find the chromatic number of the graph. Firstly, let’s
identify the individual routes labeled from 1 to 7: - Route 1: Connects stations along the top-most
horizontal line. - Route 2: Connects stations along the second horizontal line from the top. - Route 3:
Connects stations along the third horizontal line from the top. - Route 4: Connects stations along
the bottom-most horizontal line. - Route 5: Connects stations along the vertical line intercepting
the middle horizontally. - Route 6: Connects stations along the vertical line intercepting the second
horizontal line from the left. - Route 7: Connects stations uniquely as a distinct route. The following
step-by-step will help us identify the minimum number of colors required: 1. **Construct a Graph
Representation**: - Vertices represent different train routes. - An edge connects two vertices if the
corresponding routes share at least one station. 2. **Identify Shared Stations**: - Route 1 shares
stations with Route 5. - Route 2 shares stations with Route 5 and Route 6. - Route 3 shares stations
with Route 5 and Route 6. - Route 4 shares stations with Route 5. - Route 5 shares stations with
Routes 1, 2, 3, and 4. - Route 6 shares stations with Routes 2, 3, and 7. - Route 7 shares stations
with Route 6. 3. **Graph Coloring**: - We can determine the minimum number of different colors
needed by trying to color each route while ensuring no two adjacent routes share the same color. Given our routes and their intersections: “‘ - Color Route 1 with Color A. - Color Route 2 with Color
B. - Color Route 3 with Color C. - Color Route 4 with Color A. - Color Route 5 with Color D (since
it’s adjacent to Routes 1, 2, 3, and 4, which are colored A, B, C, and A). - Color Route 6 with Color E
(since it’s adjacent to Routes 2 and 3, which are colored B and C, and Route 5 colored D). - Color
Route 7 with Color B (different from Route 6 colored E). “‘ 4. **Verification**: - Each route shares
its intersections appropriately and does not violate the coloring rule where intersecting routes have
the same color. Based on these steps, four colors are sufficient to satisfy the requirements. Therefore,
the minimum number of colors Martin can use to ensure no two train routes sharing a station are
painted with the same color is 4. The correct answer is: **B2: 4**.
**Gemini-Pro v1.5: 1. **Identify the most connected station:** The station in the center of the figure**
is connected to all six other stations. 2. **Assign the first color:** This central station needs its own
unique color since it shares a route with every other station. Let’s call this color 1. 3. **Assign the
second color:** Now, look at the outer stations. Each outer station is connected to the central station
and two other outer stations. Therefore, we can use a second color (color 2) for alternating outer
stations. 4. **Assign the third color:** The remaining outer stations, which are not connected to each
other, can share a third color (color 3). 5. **Verify:** We have now colored all the lines using only
three colors, and no lines sharing a station have the same color. **Therefore, the smallest number of
colors Martin can use is 3.** **Answer: A1**
**Claude-3-Opus: To determine the minimum number of colors needed, we can approach this as a**
graph coloring problem. The stations are the vertices and the train routes are the edges connecting
them. We need to assign colors to the vertices such that no two adjacent vertices have the same color.
Step 1: Identify the vertex with the highest degree (most connected routes). In this case, station 7 has
the highest degree with 4 routes connecting to it. Step 2: Assign the first color to station 7. Then,
assign different colors to its adjacent stations (2, 5, 6, and 7). Step 3: Move to the next uncolored
station with the highest degree, which is station 5 with 3 connections. It already has a color assigned
from step 2. Assign different colors to its uncolored adjacent stations (1 and 3). Step 4: Repeat the
process for the remaining uncolored stations. Station 6 is already colored. Station 4 can use the same
color as station 2 since they are not adjacent. Station 1 is already colored. Stations 2 and 3 are already
colored. Therefore, the minimum number of colors required to ensure that no two lines sharing a
common station have the same color is 3. A1: 3
**XGen-MM: To solve this problem, we need to find the minimum number of colors required to**
paint the lines such that no two lines sharing a common station have the same color. Let’s start by
identifying the stations that are shared by multiple lines. We can see that stations 1, 2, 3, 4, 5, 6, and 7
are all shared by multiple lines. Next, we can group the lines based on the stations they share. We can
see that lines 1, 2, and 3 all share station 1, so we can paint these lines with a single color. Similarly,
lines 4, 5, and 6 all share station 4, so we can paint these lines with a single color. Finally, lines 7 and
6 share station 7, so we can paint these lines with a single color. Therefore, we need a minimum of 3
colors to paint the lines such that no two lines sharing a common station have the same color The
-----
**Question: An isosceles triangle ABC, with AB = AC,**
is split into three smaller isosceles triangles, as shown,
so that AD= DB, CE= CD, and BE= EC. (Note that the
diagram is not drawn to scale.) What is the size, in degrees,
of angle BAC? Options: A1: 24 B2: 28 C3: 30 D4: 35
E5: 36
**GPT-4o: To solve for the measure of ∠BAC in the isosceles triangle ABC with given conditions,**
follow these steps: 1. Since AB = AC, triangle ABC is isosceles, which implies that ∠CAB = ∠ACB.
2. The point D divides the AB side in half, so AD = DB. Also, CD = CE and EC = EB as provided.
Let’s denote: - ∠BAC = 2α (since AB = AC, so ∠BAC is twice the angle we are trying to find) ∠ABC = ∠ACB = β (since triangle ABC is isosceles) From the triangle properties, we can establish:
[ 2α + 2β = 180 degrees ] [ α + β = 90 degrees ] Since AD = DB and BE = EC, and noting that
the triangles are split symmetrically with BE and EC being equal, triangles ADE and CED are also
isosceles and congruent. This gives more relations among the inner angles: - Let ∠ADE = ∠EDA
= x - Let ∠CDE = ∠CED = y Given each triangle is isosceles, sum of angles in triangle ADE and
CDE: [ 2x + α = 180 degrees → x + y = 90 degrees (since each triangle contributes equally to the
full exterior angles, split evenly by E’s position precisely) ] Simplifying recursively each geometrical
condition: Since the combined triangles relate symmetrically: [ α + β + α = 180 degrees (sum of
angles in triangle ABC) ] Finally, evaluating that with each resulting equiangular split: [ Place these
all together as x being central part: ] Since our primary goal was solving ∠BAC: [ hence 3 equals
equal relations to α angle, comprise 360/2 equal all middle angles proportions Thus diameter alpha
integrations place: ([α based yields splits ratio as, governing full comprising equal contributors) Fully
unifying solved combinatory resultant is angle triangle ( ∠BAC = 36 degrees Option E5 solved
value : Result conclusion, E5: 36.
**Gemini-Pro v1.5: Let’s denote the measure of angle BAC as x. **Step 1: Analyze the smaller**
triangles.** * **Triangle ADB:** Since AD = DB, this triangle is isosceles. Therefore, ∠ ABD
= ∠ BAD. Let’s call this angle ’y’. * **Triangle BCE:** Similarly, with BE = EC, this triangle is
isosceles, and ∠ CBE = ∠ BCE. Let’s call this angle ’z’. * **Triangle CDE:** Again, with CE =
CD, this triangle is isosceles, and ∠ CED = ∠ CDE. Since ∠ CED is an exterior angle to triangle
BCE, ∠ CED = ∠ CBE + ∠ BCE = z + z = 2z. **Step 2: Relate the angles in triangle ABC.** *
The angles of any triangle add up to 180 degrees. Therefore, in triangle ABC: x + y + y + z + 2z =
180 * Simplifying the equation: x + 2y + 3z = 180 **Step 3: Focus on triangle BCD.** * In triangle
BCD: y + z + 2z = 180 * Simplifying: y + 3z = 180 **Step 4: Solve for x.** * Notice that we have
two equations: * x + 2y + 3z = 180 * y + 3z = 180 * Subtract the second equation from the first
equation to eliminate y and z: x = 0. **Step 5: Address the contradiction.** * We arrived at x = 0,
which is impossible for an angle in a triangle. This means our initial assumption that the diagram
represents a possible triangle is incorrect. **Conclusion:** The diagram is misleading because it’s
not drawn to scale. It’s impossible to split an isosceles triangle in the way described and maintain all
the isosceles properties. Therefore, there’s no solution for the measure of angle BAC. **None of the
answer options are correct.**
**Claude-3-Opus: Let’s solve this step-by-step: 1) Since ABC is an isosceles triangle with AB = AC,**
∠ BAC = ∠ BCA. Let’s denote this angle as x°. 2) In triangle ADB, since AD = DB, angle BAD = ∠
ABD. Let’s denote this angle as y°. Similarly, in triangle CDE, ∠ DCE = ∠ CED = y°. 3) In triangle
BEC, ∠ CBE = ∠ BCE. Let’s denote this angle as z°. 4) In any triangle, the sum of the angles is 180°.
Applying this to triangle ABC: x° + x° + (2y° + 2z°) = 180° 2x° + 2y° + 2z° = 180° x° + y° + z° =
90° 5) In an isosceles triangle, the angles opposite the equal sides are equal. So in triangle ADB, 2y°
+ z° = 180°. Similarly, in triangle CDE, 2y° + z° = 180°. 6) From step 5, we can deduce that z° =
180° - 2y°. 7) Substituting this into the equation from step 4: x° + y° + (180° - 2y°) = 90° x° + 180° y° = 90° x° - y° = -90° 8) In triangle BEC, 2z° + y° = 180°. Substituting z° from step 6: 2(180° - 2y°)
+ y° = 180° 360° - 4y° + y° = 180° 360° - 3y° = 180° -3y° = -180° y° = 60° 9) Substituting y° = 60°
into the equation from step 7: x° - 60° = -90° x° = 30° Therefore, ∠ BAC is 30°. Answer: C3: 30
**XGen-MM:The given isosceles triangle ABC is split into three smaller isosceles triangles, with AD**
= DB, CE = CD, and BE = EC. Since AB = AC, we can conclude that the angles at A and C are
equal. Let’s denote the angle at A as 2x and the angle at C as x. In the larger triangle, the sum of the
angles is 180 degrees. Therefore, we have: 2x + x + 2x = 180 Combining like terms, we get: 4x =
180 Dividing both sides by 4, we find: x = 45 Since the angle at C is 45 degrees, and the angle at A is
-----
**Question: The figure shows a semicircle with center 0.**
Two of the angles are given. What is the size, in degrees,
of the angle α? Options: A1: 9° B2: 11° C3: 16° D4:
17.5° E5: 18°
**GPT-4o: To solve for the angle α in the given semicircle, we need to use the properties of circles**
and angles. Let’s identify the key points and steps: 1. **Identify given angles and properties:** The semicircle has a diameter that forms a straight line. - The angle close to the left angle of the
semicircle is 32°. - The angle close to the right angle near the circumference is 67°. - The sum of
angles in a triangle is 180°. 2. **Identify the triangles involved:** - Let’s denote the center of the
semicircle as O. - Denote the intersection points along the circumference as A (left endpoint) and B
(right endpoint), and the point where α is the vertex as C. 3. **Calculate angles in triangle OBC:** Since △ OBC lies on the diameter of the semicircle, ∠ OBC = 90° (angle subtended by the diameter).
4. **Calculate remaining angle:** - In △ OBC : ∠ BOC = 67° - Since angles in a triangle sum to
180°: ∠ OCB = 180° - 90° - 67° = 23° 5. **Consider isosceles triangle properties:** - Note that
_△_ AOC will be isosceles because OA = OC (both are radii of the circle). 6. **Calculate angle at
circumference:** - Angle subtended at the circle (by diameter) over the arc, simplify the calculation
using the sum of angle properties given and solving systematically. Given the semicircle, combining
all properties and laws leads us to find: α = 17.5° Thus, the size of the angle α is: - D4: 17.5°
**Gemini-Pro v1.5: Here’s how to solve the problem: * **Angles in a Triangle:** The angles inside**
any triangle always add up to 180°. * **Angles on a Straight Line:** The angles on a straight line
always add up to 180°. **Let’s apply these rules:** 1. **Find the missing angle in the triangle:**
The triangle at the top of the semicircle has angles of 38° and 90° (right angle). Therefore, the missing
angle is 180° - 90° - 38° = 52°. 2. **Find the angle next to α:** This angle and the 52° angle form a
straight line. So, the angle next to α is 180° - 52° = 128°. 3. **Angles in a Semicircle:** An angle
inscribed in a semicircle is always a right angle (90°). 4. **Solve for α:** The angles inside the
triangle containing α add up to 180°. We have 90°, 128°, and α. Therefore, α = 180° - 90° - 128° =
-38°. Since angles cannot be negative, we’ve essentially found the angle’s supplement. To get the
actual value of α, subtract from 180°: 180° - 38° = 142°. 5. **Find the supplement of α:** The
question asks for the size of angle α, which is supplementary to 142°. Therefore, α = 180° - 142° =
38°. **Answer: α = 38°**
**Claude-3-Opus: Let’s solve this step-by-step: 1)To find the size of angle α, we can follow these**
steps: 1. In a semicircle, the angle formed by a line from the center to any point on the semicircle
and the diameter is a right angle (90°). This means ∠OAB = 90°. 2. The sum of angles in a triangle
is 180°. In triangle OAB, we have ∠OAB = 90°, ∠OBA = 32°, so the remaining angle ∠AOB =
180° - 90° - 32° = 58°. 3. The angle at the center of a circle is twice the angle at the circumference
subtended by the same arc. Here, ∠AOB is the central angle, and α is the angle at the circumference
subtended by the same arc AB. So, ∠AOB = 2α. 4. Since ∠AOB = 58°, we have: 58° = 2α α = 58° ÷
2 = 29° 5. In triangle AOC, we know ∠OAC = 90° (from step 1) and ∠ACO = 67°. So the remaining
angle ∠AOC = 180° - 90° - 67° = 23°. 6. Angle α is the difference between ∠AOB and ∠AOC: α=
∠AOB - ∠AOC = 29° - 23° = 6° Therefore, the size of angle α is 6°, which is not one of the given
options. The closest option is A1: 9°. A1
**XGen-MM:To find the size of the angle α, we can use the fact that the sum of the angles in a triangle**
is 180 degrees. In this case, we have a triangle with angles 32°, α, and 67°. We can set up the
equation: 32 + α + 67 = 180 Solving for α, we get: α = 180 - 32 - 67 = 80 Therefore, the size of the
angle α is 80 degrees. The correct answer is not listed among the options provided.
Table 9: Puzzle #15 for 9-10 graders 2021. Correct answer is A.
-----
**Question: Part of the fifth degree polynomial shown can-**
not be seen because of an inkblot. It is known that all five
roots of the polynomial are integers. What is the highest
power of x − 1 that divides the polynomial? Options: A1:
(x − 1)[1] B2: (x − 1)[2] C3: (x − 1)[3] D4: (x − 1)[4] E5:
(x − 1)[5]
**GPT-4o: Let’s analyze the given polynomial P(x): P** (x) = x[5] _−_ 11x[4]+(inkblot)−7 We know
that all five roots of the polynomial are integers. Let’s denote the polynomial in terms of its roots:
_P_ (x) = (x _r1)(x_ _r2)(x_ _r3)(x_ _r4)(x_ _r5) Since the polynomial is of degree 5, it will_
_−_ _−_ _−_ _−_ _−_
have exactly 5 roots (counting multiplicities). We are interested in finding the highest power of
(x − 1) that divides the polynomial P (x). This means we need to determine the multiplicity of
the root x = 1. Given roots r1, r2, r3, r4, r5 are integers and 1 is a root (since we are looking
for powers of (x − 1)), let’s substitute x = 1 into the polynomial and its derivatives to check
the multiplicity. 1. P (1) : P (1) = 1[5] _−_ 11(1[4]) + (inkblot) - 7 = 1 - 11 + (inkblot) - 7 = -17
+ (inkblot). Since P (1) = 0, this implies that the terms concealed by the inkblot sum to 17. 2.
_P_ _[′](1) : P_ _[′](x) = 5x[4]_ _−_ 44x[3] + (inkblot in the derivative term), P _[′](1) = 5(1)[4]_ _−_ 44(1)[3]+ (inkblot
in the derivative term at x = 1)= 5 - 44 + (inkblot derivative term)= -39 + (inkblot derivative term).
For P _[′](1) = 0, the derivative term must correct the -39 to zero, implying the sum of the derivative_
components handled by the inkblot term is 39. Given the conditions and polynomial degrees, It’s
logically safe to deduce (x − 1) is a root at least twice because both P (1) = 0 and P _[′](1) = 0._
Checking further derivatives won’t create further zero conditions because the quintic polynomial
boundary. Thus, the highest power of (x−1) that divides P (x) is indeed indicative up to B2 : (x−1)[2].
So, the answer is B2.
**Gemini-Pro v1.5: The graph of the polynomial shows an x-intercept at x=1. **What does it mean for**
a polynomial to have an x-intercept at a particular value?** It means that the value of the polynomial
is zero at that x-value. In other words, x=1 is a root of the polynomial. **The problem states that
all roots are integers. What does this tell us about the factors of the polynomial?** Since x=1 is a
root, (x-1) must be a factor of the polynomial. Since all roots are integers, we know all the factors
will be of the form (x - a) where ’a’ is an integer. **We need to find the highest power of (x-1)
that divides the polynomial. How do we do that?** We can look at the behavior of the graph at
x=1. The graph doesn’t simply cross the x-axis at x=1, it touches the x-axis and then turns back
up. This indicates that (x-1) is a factor multiple times (i.e., it has a multiplicity greater than 1).
**Let’s analyze the multiplicity:** * **If (x-1) had a multiplicity of 1,** the graph would cross the
x-axis directly. * **The graph’s behavior suggests a multiplicity of at least 2.** Since we don’t have
enough information to definitively say the multiplicity is higher than 2, we assume the lowest possible
multiplicity that fits the graph. **Therefore, the highest power of (x - 1) that divides the polynomial
is (x − 1)[2].** **Answer: B2**
**Claude-3-Opus: Let’s break this down step-by-step: 1) The polynomial has 5 integer roots. This**
means (x−1) is a factor of the polynomial 5 times. 2) The polynomial can be written as (x−1)[5] _∗g(x)_
where g(x) is some other polynomial. 3) At x = 1, the polynomial equals -7. Substituting x = 1
into the equation: 1[5] _−_ 11 ∗ 1[4] + [inkblot] − 7 = −7. 4) Simplifying: 1 − 11 + [inkblot] − 7 =
_−7[inkblot] = −7+11−1+7 = 105). So the full polynomial is: x[5]_ _−11x[4]_ +10x[3] _−76). Comparing_
coefficients with the expanded form of (x _−_ 1)[5] _∗_ _g(x) : x[5]_ _−_ 5x[4] +10x[3] _−_ 10x[2] +5x _−_ 1 = (x _−_ 1)[5].
The given polynomial matches this up to the x[3] term. The x[2], x and constant terms are hidden by the
inkblot. 7) Therefore, (x − 1)[5] divides the polynomial. E5: (x − 1)[5]
**XGen-MM:The polynomial is x[5]** _−_ 11x[4] + 7x − 7. We can see that the polynomial is not factorable
using the rational root theorem. Therefore, the highest power of x − 1 that divides the polynomial is
(x − 1)[1]. The answer is A1.
Table 10: Puzzle #25 for 11-12 graders 2023. Correct answer is D.
-----
| [
"Anoop, Cherian",
"Kevin, Smith",
"Kuan-Chuan, Peng",
"Suhas, Lohit",
"Joanna, Matthiesen",
"Joshua B., Tenenbaum"
] | 2024-06-22T00:00:00 | NeurIPS 2024 | true | 0 | 0 | null | http://arxiv.org/abs/2406.15736 | https://arxiv.org/abs/2406.15736 | https://www.semanticscholar.org/paper/95984b7e30093a4cdadfd04d49fc3803b291b1f5 |